Test Report: Docker_Linux_crio 21866

77bc04e31513dc44a023e1d185fd1b44f1864364:2025-11-08:42249

Failed tests (38/327)

Order  Failed test  Duration (s)
29 TestAddons/serial/Volcano 0.24
35 TestAddons/parallel/Registry 13.55
36 TestAddons/parallel/RegistryCreds 0.47
37 TestAddons/parallel/Ingress 145.15
38 TestAddons/parallel/InspektorGadget 5.25
39 TestAddons/parallel/MetricsServer 5.32
41 TestAddons/parallel/CSI 45.52
42 TestAddons/parallel/Headlamp 2.48
43 TestAddons/parallel/CloudSpanner 5.26
44 TestAddons/parallel/LocalPath 8.1
45 TestAddons/parallel/NvidiaDevicePlugin 5.26
46 TestAddons/parallel/Yakd 6.24
47 TestAddons/parallel/AmdGpuDevicePlugin 5.28
97 TestFunctional/parallel/ServiceCmdConnect 602.83
114 TestFunctional/parallel/ImageCommands/ImageListShort 2.29
120 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.08
125 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 1.03
132 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.27
133 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.3
135 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.19
136 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.33
137 TestFunctional/parallel/ServiceCmd/DeployApp 600.55
152 TestFunctional/parallel/ServiceCmd/HTTPS 0.52
153 TestFunctional/parallel/ServiceCmd/Format 0.53
154 TestFunctional/parallel/ServiceCmd/URL 0.53
191 TestJSONOutput/pause/Command 2.14
197 TestJSONOutput/unpause/Command 1.7
293 TestPause/serial/Pause 5.71
347 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 2.19
351 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 2.23
352 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 2.35
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 2.9
371 TestStartStop/group/old-k8s-version/serial/Pause 6.27
375 TestStartStop/group/no-preload/serial/Pause 5.91
377 TestStartStop/group/embed-certs/serial/Pause 8.52
383 TestStartStop/group/default-k8s-diff-port/serial/Pause 6.29
385 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 2.09
392 TestStartStop/group/newest-cni/serial/Pause 5.57
TestAddons/serial/Volcano (0.24s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:850: skipping: crio not supported
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable volcano --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable volcano --alsologtostderr -v=1: exit status 11 (243.920788ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:00.957817   18604 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:00.958113   18604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:00.958123   18604 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:00.958127   18604 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:00.958347   18604 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:00.958588   18604 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:00.958909   18604 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:00.958924   18604 addons.go:607] checking whether the cluster is paused
	I1108 08:31:00.959004   18604 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:00.959015   18604 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:00.959379   18604 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:00.977242   18604 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:00.977334   18604 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:00.994817   18604 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:01.086903   18604 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:01.086976   18604 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:01.117039   18604 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:01.117060   18604 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:01.117064   18604 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:01.117067   18604 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:01.117070   18604 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:01.117074   18604 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:01.117077   18604 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:01.117079   18604 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:01.117082   18604 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:01.117092   18604 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:01.117095   18604 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:01.117098   18604 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:01.117101   18604 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:01.117104   18604 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:01.117106   18604 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:01.117110   18604 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:01.117112   18604 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:01.117117   18604 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:01.117119   18604 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:01.117121   18604 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:01.117124   18604 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:01.117127   18604 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:01.117129   18604 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:01.117132   18604 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:01.117134   18604 cri.go:89] found id: ""
	I1108 08:31:01.117170   18604 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:01.131707   18604 out.go:203] 
	W1108 08:31:01.133099   18604 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:01Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:01.133115   18604 out.go:285] * 
	W1108 08:31:01.136129   18604 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9bd16c244da2144137a37071fb77e06a574610a0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:01.137404   18604 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volcano addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable volcano --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/serial/Volcano (0.24s)
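Note that Volcano itself is skipped on crio (addons_test.go:850); what fails is the follow-up `addons disable volcano` call. Before disabling an addon, minikube probes whether the cluster is paused, and the probe visible in the stderr above shells out to `sudo runc list -f json`, which exits 1 because `/run/runc` does not exist on this CRI-O node. A minimal Go sketch of that probe, assuming only what the log shows rather than minikube's actual source:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// The paused check lists container state via runc (run over SSH by minikube).
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
	if err != nil {
		// On this node runc fails before listing anything:
		//   open /run/runc: no such file or directory
		// which minikube surfaces as MK_ADDON_DISABLE_PAUSED.
		if strings.Contains(string(out), "no such file or directory") {
			fmt.Println("runc state dir missing; paused state cannot be determined")
			return
		}
		fmt.Println("runc list failed:", err)
		return
	}
	fmt.Println(string(out))
}
```

The Registry and RegistryCreds logs below end in identical stderr, so the same probe appears to be the common cause of the `addons disable` failures in this run.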

TestAddons/parallel/Registry (13.55s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:382: registry stabilized in 2.966449ms
addons_test.go:384: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Running
addons_test.go:384: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002991324s
addons_test.go:387: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:352: "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Running
addons_test.go:387: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003231264s
addons_test.go:392: (dbg) Run:  kubectl --context addons-758852 delete po -l run=registry-test --now
addons_test.go:397: (dbg) Run:  kubectl --context addons-758852 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:397: (dbg) Done: kubectl --context addons-758852 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.111497055s)
addons_test.go:411: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 ip
2025/11/08 08:31:23 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable registry --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable registry --alsologtostderr -v=1: exit status 11 (235.366669ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:23.300315   21483 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:23.300653   21483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:23.300666   21483 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:23.300684   21483 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:23.300889   21483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:23.301166   21483 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:23.301544   21483 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:23.301564   21483 addons.go:607] checking whether the cluster is paused
	I1108 08:31:23.301666   21483 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:23.301681   21483 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:23.302026   21483 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:23.319951   21483 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:23.320015   21483 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:23.337932   21483 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:23.429835   21483 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:23.429913   21483 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:23.458968   21483 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:23.458995   21483 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:23.459001   21483 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:23.459006   21483 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:23.459010   21483 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:23.459016   21483 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:23.459021   21483 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:23.459025   21483 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:23.459029   21483 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:23.459046   21483 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:23.459051   21483 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:23.459055   21483 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:23.459062   21483 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:23.459067   21483 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:23.459099   21483 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:23.459106   21483 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:23.459111   21483 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:23.459116   21483 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:23.459120   21483 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:23.459124   21483 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:23.459127   21483 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:23.459131   21483 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:23.459134   21483 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:23.459138   21483 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:23.459141   21483 cri.go:89] found id: ""
	I1108 08:31:23.459193   21483 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:23.473300   21483 out.go:203] 
	W1108 08:31:23.474714   21483 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:23Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:23.474736   21483 out.go:285] * 
	W1108 08:31:23.477676   21483 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_94fa7435cdb0fda2540861b9b71556c8cae5c5f1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:23.478916   21483 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable registry --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Registry (13.55s)
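The functional half of this test passed: both registry pods became healthy within 5s, the in-cluster `wget --spider` probe returned in ~3s, and the `[DEBUG] GET http://192.168.49.2:5000` line shows the registry answering on the node IP. Only the trailing `addons disable registry` fails, with the same runc error as in the Volcano log. For reference, a standalone Go version of that reachability probe; the node IP comes from the `minikube ip` step and the 10-second timeout is illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Node IP taken from the `minikube ip` output above; 5000/tcp is the
	// registry port (also listed among the container's exposed ports in the
	// docker inspect dump later in this report).
	client := &http.Client{Timeout: 10 * time.Second}
	resp, err := client.Get("http://192.168.49.2:5000/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry answered:", resp.Status)
}
```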

TestAddons/parallel/RegistryCreds (0.47s)

=== RUN   TestAddons/parallel/RegistryCreds
=== PAUSE TestAddons/parallel/RegistryCreds

=== CONT  TestAddons/parallel/RegistryCreds
addons_test.go:323: registry-creds stabilized in 3.412878ms
addons_test.go:325: (dbg) Run:  out/minikube-linux-amd64 addons configure registry-creds -f ./testdata/addons_testconfig.json -p addons-758852
addons_test.go:332: (dbg) Run:  kubectl --context addons-758852 -n kube-system get secret -o yaml
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable registry-creds --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable registry-creds --alsologtostderr -v=1: exit status 11 (269.493181ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:29.036035   21973 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:29.036254   21973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:29.036270   21973 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:29.036278   21973 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:29.036599   21973 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:29.036938   21973 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:29.037485   21973 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:29.037509   21973 addons.go:607] checking whether the cluster is paused
	I1108 08:31:29.037653   21973 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:29.037674   21973 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:29.038276   21973 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:29.059962   21973 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:29.060073   21973 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:29.083311   21973 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:29.182052   21973 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:29.182133   21973 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:29.213630   21973 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:29.213655   21973 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:29.213661   21973 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:29.213665   21973 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:29.213669   21973 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:29.213673   21973 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:29.213678   21973 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:29.213683   21973 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:29.213687   21973 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:29.213695   21973 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:29.213700   21973 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:29.213705   21973 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:29.213711   21973 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:29.213716   21973 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:29.213724   21973 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:29.213737   21973 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:29.213744   21973 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:29.213750   21973 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:29.213753   21973 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:29.213757   21973 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:29.213763   21973 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:29.213766   21973 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:29.213770   21973 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:29.213773   21973 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:29.213776   21973 cri.go:89] found id: ""
	I1108 08:31:29.213821   21973 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:29.228286   21973 out.go:203] 
	W1108 08:31:29.229713   21973 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:29Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:29.229730   21973 out.go:285] * 
	W1108 08:31:29.232796   21973 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_ac42ae7bb4bac5cd909a08f6506d602b3d2ccf6c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:29.234090   21973 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable registry-creds addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable registry-creds --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/RegistryCreds (0.47s)

TestAddons/parallel/Ingress (145.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-758852 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-758852 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-758852 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:352: "nginx" [f915039a-7452-477a-8746-c25305e49604] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx" [f915039a-7452-477a-8746-c25305e49604] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004313363s
I1108 08:31:24.687674    9369 kapi.go:150] Service nginx in namespace default found.
addons_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:264: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.779310318s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:280: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:288: (dbg) Run:  kubectl --context addons-758852 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.168.49.2
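The nginx pod went Ready in 9s, but the curl issued through `minikube ssh` hung for 2m12s and the remote process exited with status 28, curl's code for "operation timed out", meaning the ingress never answered on 127.0.0.1. A Go sketch of the same check, with the Host header that selects the nginx ingress rule (the 30-second timeout is illustrative; the test's curl sets none):

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 30 * time.Second}
	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Mirrors: curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		fmt.Println("no response from ingress:", err) // the test's curl timed out here
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```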
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Ingress]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-758852
helpers_test.go:243: (dbg) docker inspect addons-758852:

-- stdout --
	[
	    {
	        "Id": "e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310",
	        "Created": "2025-11-08T08:29:20.530762203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T08:29:20.564200147Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/hosts",
	        "LogPath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310-json.log",
	        "Name": "/addons-758852",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-758852:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-758852",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310",
	                "LowerDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-758852",
	                "Source": "/var/lib/docker/volumes/addons-758852/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-758852",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-758852",
	                "name.minikube.sigs.k8s.io": "addons-758852",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1670eeb0c3484c8e43bd330d854fcf230f75bedd8b125682c0c7076edd32448d",
	            "SandboxKey": "/var/run/docker/netns/1670eeb0c348",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-758852": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:02:8c:7f:08:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a7899770708615c4706a5710ae8a5596af2916badb1ef0028942a781a5d4667",
	                    "EndpointID": "defc44ed795e65e28bad37881281926a9d284568cd9b46084f66c7ad5f761f25",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-758852",
	                        "e8c4e7921138"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
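The stderr traces earlier in this report pull single fields out of exactly this JSON with Go templates, e.g. `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}` for the SSH port. A sketch of the same lookup with encoding/json, assuming the inspect output above has been saved to inspect.json:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// Only the fields we need; names match the docker inspect dump above.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	data, err := os.ReadFile("inspect.json") // e.g. docker inspect addons-758852 > inspect.json
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(data, &cs); err != nil {
		panic(err)
	}
	// Equivalent of the template above; prints 32768 for this dump.
	fmt.Println("ssh host port:", cs[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
}
```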
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-758852 -n addons-758852
helpers_test.go:252: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-758852 logs -n 25: (1.096492003s)
helpers_test.go:260: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ --download-only -p binary-mirror-174375 --alsologtostderr --binary-mirror http://127.0.0.1:33529 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-174375 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-174375                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-174375 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ addons  │ enable dashboard -p addons-758852                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-758852                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ start   │ -p addons-758852 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-758852 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-758852 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable headlamp --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable nvidia-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable metrics-server --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ ssh     │ addons-758852 ssh cat /opt/local-path-provisioner/pvc-79233732-933d-46d0-b689-a8767082a39b_default_test-pvc/file1                                                                                                                                                                                                                                                                                                                                                        │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-758852 addons disable cloud-spanner --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable storage-provisioner-rancher --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                          │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ ip      │ addons-758852 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-758852 addons disable registry --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ ssh     │ addons-758852 ssh curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable inspektor-gadget --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                     │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable amd-gpu-device-plugin --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ configure registry-creds -f ./testdata/addons_testconfig.json -p addons-758852                                                                                                                                                                                                                                                                                                                                                                                           │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-758852 addons disable registry-creds --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable yakd --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                 │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable volumesnapshots --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                      │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │                     │
	│ addons  │ addons-758852 addons disable csi-hostpath-driver --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:32 UTC │                     │
	│ ip      │ addons-758852 ip                                                                                                                                                                                                                                                                                                                                                                                                                                                         │ addons-758852        │ jenkins │ v1.37.0 │ 08 Nov 25 08:33 UTC │ 08 Nov 25 08:33 UTC │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:28:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:28:56.282578   10713 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:28:56.282859   10713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:56.282869   10713 out.go:374] Setting ErrFile to fd 2...
	I1108 08:28:56.282875   10713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:56.283113   10713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:28:56.283677   10713 out.go:368] Setting JSON to false
	I1108 08:28:56.284468   10713 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":687,"bootTime":1762589849,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:28:56.284555   10713 start.go:143] virtualization: kvm guest
	I1108 08:28:56.286294   10713 out.go:179] * [addons-758852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:28:56.287591   10713 notify.go:221] Checking for updates...
	I1108 08:28:56.287628   10713 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:28:56.288977   10713 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:28:56.290412   10713 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:28:56.291761   10713 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:28:56.292976   10713 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:28:56.294271   10713 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:28:56.295569   10713 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:28:56.320749   10713 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:28:56.320832   10713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:56.373071   10713 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-08 08:28:56.364089471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:56.373182   10713 docker.go:319] overlay module found
	I1108 08:28:56.375685   10713 out.go:179] * Using the docker driver based on user configuration
	I1108 08:28:56.376836   10713 start.go:309] selected driver: docker
	I1108 08:28:56.376850   10713 start.go:930] validating driver "docker" against <nil>
	I1108 08:28:56.376861   10713 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:28:56.377458   10713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:56.436396   10713 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-08 08:28:56.426263046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:56.436571   10713 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:28:56.436858   10713 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 08:28:56.438528   10713 out.go:179] * Using Docker driver with root privileges
	I1108 08:28:56.439689   10713 cni.go:84] Creating CNI manager for ""
	I1108 08:28:56.439759   10713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:28:56.439772   10713 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 08:28:56.439862   10713 start.go:353] cluster config:
	{Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:28:56.441063   10713 out.go:179] * Starting "addons-758852" primary control-plane node in "addons-758852" cluster
	I1108 08:28:56.442192   10713 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 08:28:56.443515   10713 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 08:28:56.444612   10713 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:28:56.444636   10713 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 08:28:56.444646   10713 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 08:28:56.444658   10713 cache.go:59] Caching tarball of preloaded images
	I1108 08:28:56.444744   10713 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 08:28:56.444754   10713 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 08:28:56.445091   10713 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/config.json ...
	I1108 08:28:56.445132   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/config.json: {Name:mk828d6cdb3802c624ae356a896e12f2d3ab3fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:28:56.462495   10713 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 08:28:56.462699   10713 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 08:28:56.462720   10713 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 08:28:56.462724   10713 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 08:28:56.462732   10713 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 08:28:56.462739   10713 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 08:29:09.064121   10713 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 08:29:09.064166   10713 cache.go:233] Successfully downloaded all kic artifacts
	I1108 08:29:09.064211   10713 start.go:360] acquireMachinesLock for addons-758852: {Name:mk5cdf28796b16a0304b87e414c01f4f8b67de6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 08:29:09.064356   10713 start.go:364] duration metric: took 117.39µs to acquireMachinesLock for "addons-758852"
	I1108 08:29:09.064391   10713 start.go:93] Provisioning new machine with config: &{Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 08:29:09.064483   10713 start.go:125] createHost starting for "" (driver="docker")
	I1108 08:29:09.066209   10713 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 08:29:09.066472   10713 start.go:159] libmachine.API.Create for "addons-758852" (driver="docker")
	I1108 08:29:09.066511   10713 client.go:173] LocalClient.Create starting
	I1108 08:29:09.066618   10713 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 08:29:09.408897   10713 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 08:29:09.680537   10713 cli_runner.go:164] Run: docker network inspect addons-758852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 08:29:09.697781   10713 cli_runner.go:211] docker network inspect addons-758852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 08:29:09.697851   10713 network_create.go:284] running [docker network inspect addons-758852] to gather additional debugging logs...
	I1108 08:29:09.697873   10713 cli_runner.go:164] Run: docker network inspect addons-758852
	W1108 08:29:09.714633   10713 cli_runner.go:211] docker network inspect addons-758852 returned with exit code 1
	I1108 08:29:09.714663   10713 network_create.go:287] error running [docker network inspect addons-758852]: docker network inspect addons-758852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-758852 not found
	I1108 08:29:09.714675   10713 network_create.go:289] output of [docker network inspect addons-758852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-758852 not found
	
	** /stderr **
	I1108 08:29:09.714780   10713 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 08:29:09.732331   10713 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bca860}
	I1108 08:29:09.732381   10713 network_create.go:124] attempt to create docker network addons-758852 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 08:29:09.732442   10713 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-758852 addons-758852
	I1108 08:29:09.787148   10713 network_create.go:108] docker network addons-758852 192.168.49.0/24 created
	I1108 08:29:09.787178   10713 kic.go:121] calculated static IP "192.168.49.2" for the "addons-758852" container
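
For anyone replaying this step by hand, the subnet and gateway chosen above can be read back with a plain docker command. A minimal sketch, assuming the addons-758852 network from this log still exists (substitute your own profile name otherwise):

	# Read back the subnet and gateway of the network minikube just created.
	docker network inspect addons-758852 \
	  --format '{{(index .IPAM.Config 0).Subnet}} gw {{(index .IPAM.Config 0).Gateway}}'
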
	I1108 08:29:09.787248   10713 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 08:29:09.804796   10713 cli_runner.go:164] Run: docker volume create addons-758852 --label name.minikube.sigs.k8s.io=addons-758852 --label created_by.minikube.sigs.k8s.io=true
	I1108 08:29:09.823202   10713 oci.go:103] Successfully created a docker volume addons-758852
	I1108 08:29:09.823269   10713 cli_runner.go:164] Run: docker run --rm --name addons-758852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758852 --entrypoint /usr/bin/test -v addons-758852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 08:29:16.072216   10713 cli_runner.go:217] Completed: docker run --rm --name addons-758852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758852 --entrypoint /usr/bin/test -v addons-758852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (6.248909991s)
	I1108 08:29:16.072252   10713 oci.go:107] Successfully prepared a docker volume addons-758852
	I1108 08:29:16.072314   10713 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:16.072339   10713 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 08:29:16.072416   10713 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 08:29:20.459775   10713 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.387319742s)
	I1108 08:29:20.459804   10713 kic.go:203] duration metric: took 4.387463054s to extract preloaded images to volume ...
	W1108 08:29:20.459890   10713 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 08:29:20.459931   10713 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 08:29:20.459975   10713 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 08:29:20.515236   10713 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-758852 --name addons-758852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-758852 --network addons-758852 --ip 192.168.49.2 --volume addons-758852:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 08:29:20.837911   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Running}}
	I1108 08:29:20.856522   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:20.875805   10713 cli_runner.go:164] Run: docker exec addons-758852 stat /var/lib/dpkg/alternatives/iptables
	I1108 08:29:20.922017   10713 oci.go:144] the created container "addons-758852" has a running status.
	I1108 08:29:20.922045   10713 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa...
	I1108 08:29:21.458987   10713 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 08:29:21.483789   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:21.502661   10713 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 08:29:21.502682   10713 kic_runner.go:114] Args: [docker exec --privileged addons-758852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 08:29:21.561054   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:21.578350   10713 machine.go:94] provisionDockerMachine start ...
	I1108 08:29:21.578446   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:21.594813   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:21.595086   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:21.595106   10713 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 08:29:21.720355   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758852
	
	I1108 08:29:21.720381   10713 ubuntu.go:182] provisioning hostname "addons-758852"
	I1108 08:29:21.720451   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:21.738880   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:21.739106   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:21.739124   10713 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-758852 && echo "addons-758852" | sudo tee /etc/hostname
	I1108 08:29:21.874553   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758852
	
	I1108 08:29:21.874640   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:21.891716   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:21.891922   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:21.891940   10713 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-758852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-758852/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-758852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 08:29:22.015788   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: 
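
The guard in the SSH command above is idempotent: it touches /etc/hosts only when no line already ends in the hostname, so a hypothetical second run is a no-op. A quick manual check, using the hostname from this log:

	# Expect a single 127.0.1.1 mapping for the node hostname (sketch).
	grep '^127.0.1.1' /etc/hosts
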
	I1108 08:29:22.015823   10713 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 08:29:22.015870   10713 ubuntu.go:190] setting up certificates
	I1108 08:29:22.015883   10713 provision.go:84] configureAuth start
	I1108 08:29:22.015930   10713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758852
	I1108 08:29:22.032963   10713 provision.go:143] copyHostCerts
	I1108 08:29:22.033032   10713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 08:29:22.033141   10713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 08:29:22.033200   10713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 08:29:22.033322   10713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.addons-758852 san=[127.0.0.1 192.168.49.2 addons-758852 localhost minikube]
	I1108 08:29:22.606007   10713 provision.go:177] copyRemoteCerts
	I1108 08:29:22.606084   10713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 08:29:22.606116   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:22.624014   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:22.716425   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 08:29:22.734579   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 08:29:22.750471   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 08:29:22.767106   10713 provision.go:87] duration metric: took 751.209491ms to configureAuth
	I1108 08:29:22.767138   10713 ubuntu.go:206] setting minikube options for container-runtime
	I1108 08:29:22.767364   10713 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:22.767491   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:22.784581   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:22.784773   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:22.784789   10713 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 08:29:23.018252   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 08:29:23.018292   10713 machine.go:97] duration metric: took 1.439908575s to provisionDockerMachine
	I1108 08:29:23.018307   10713 client.go:176] duration metric: took 13.951786614s to LocalClient.Create
	I1108 08:29:23.018333   10713 start.go:167] duration metric: took 13.951862471s to libmachine.API.Create "addons-758852"
	I1108 08:29:23.018346   10713 start.go:293] postStartSetup for "addons-758852" (driver="docker")
	I1108 08:29:23.018361   10713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 08:29:23.018426   10713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 08:29:23.018480   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.035655   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.130641   10713 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 08:29:23.134093   10713 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 08:29:23.134122   10713 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 08:29:23.134136   10713 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 08:29:23.134197   10713 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 08:29:23.134221   10713 start.go:296] duration metric: took 115.868811ms for postStartSetup
	I1108 08:29:23.134504   10713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758852
	I1108 08:29:23.152660   10713 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/config.json ...
	I1108 08:29:23.152951   10713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:29:23.153001   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.170754   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.260241   10713 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 08:29:23.264493   10713 start.go:128] duration metric: took 14.199993259s to createHost
	I1108 08:29:23.264521   10713 start.go:83] releasing machines lock for "addons-758852", held for 14.200146889s
	I1108 08:29:23.264588   10713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758852
	I1108 08:29:23.282509   10713 ssh_runner.go:195] Run: cat /version.json
	I1108 08:29:23.282551   10713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 08:29:23.282601   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.282554   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.301667   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.302211   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.443452   10713 ssh_runner.go:195] Run: systemctl --version
	I1108 08:29:23.449731   10713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 08:29:23.481969   10713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 08:29:23.486183   10713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 08:29:23.486245   10713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 08:29:23.511815   10713 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 08:29:23.511840   10713 start.go:496] detecting cgroup driver to use...
	I1108 08:29:23.511874   10713 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 08:29:23.511918   10713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 08:29:23.526914   10713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 08:29:23.538845   10713 docker.go:218] disabling cri-docker service (if available) ...
	I1108 08:29:23.538899   10713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 08:29:23.554183   10713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 08:29:23.570558   10713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 08:29:23.648260   10713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 08:29:23.734447   10713 docker.go:234] disabling docker service ...
	I1108 08:29:23.734496   10713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 08:29:23.751901   10713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 08:29:23.763794   10713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 08:29:23.845741   10713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 08:29:23.923576   10713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 08:29:23.935360   10713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 08:29:23.948385   10713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 08:29:23.948442   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.957973   10713 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 08:29:23.958022   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.966164   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.974230   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.982300   10713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 08:29:23.989695   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.997621   10713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:24.010406   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:24.018711   10713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 08:29:24.026504   10713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 08:29:24.026561   10713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 08:29:24.038008   10713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 08:29:24.045625   10713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:29:24.121710   10713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 08:29:24.220459   10713 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 08:29:24.220538   10713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 08:29:24.224433   10713 start.go:564] Will wait 60s for crictl version
	I1108 08:29:24.224485   10713 ssh_runner.go:195] Run: which crictl
	I1108 08:29:24.228102   10713 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 08:29:24.252596   10713 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 08:29:24.252717   10713 ssh_runner.go:195] Run: crio --version
	I1108 08:29:24.279011   10713 ssh_runner.go:195] Run: crio --version
	I1108 08:29:24.307047   10713 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 08:29:24.308262   10713 cli_runner.go:164] Run: docker network inspect addons-758852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 08:29:24.326419   10713 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 08:29:24.330447   10713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 08:29:24.340053   10713 kubeadm.go:884] updating cluster {Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 08:29:24.340168   10713 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:24.340237   10713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 08:29:24.371160   10713 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 08:29:24.371179   10713 crio.go:433] Images already preloaded, skipping extraction
	I1108 08:29:24.371220   10713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 08:29:24.394869   10713 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 08:29:24.394891   10713 cache_images.go:86] Images are preloaded, skipping loading
	I1108 08:29:24.394899   10713 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 08:29:24.394986   10713 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-758852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 08:29:24.395056   10713 ssh_runner.go:195] Run: crio config
	I1108 08:29:24.439047   10713 cni.go:84] Creating CNI manager for ""
	I1108 08:29:24.439072   10713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:29:24.439087   10713 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 08:29:24.439108   10713 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-758852 NodeName:addons-758852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 08:29:24.439217   10713 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-758852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
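
A generated config like the one above can be exercised without mutating node state via kubeadm's dry-run mode. This is a hypothetical manual step, not part of this run; it assumes kubeadm v1.34 on the PATH and the file path the log writes a few lines below:

	# Render what kubeadm would do with this config, without applying anything.
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run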
	
	I1108 08:29:24.439267   10713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 08:29:24.447041   10713 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 08:29:24.447098   10713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 08:29:24.454648   10713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 08:29:24.466756   10713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 08:29:24.481619   10713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1108 08:29:24.494122   10713 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 08:29:24.497633   10713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 08:29:24.507686   10713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:29:24.586508   10713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 08:29:24.610724   10713 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852 for IP: 192.168.49.2
	I1108 08:29:24.610747   10713 certs.go:195] generating shared ca certs ...
	I1108 08:29:24.610766   10713 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:24.610880   10713 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 08:29:24.853057   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt ...
	I1108 08:29:24.853094   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt: {Name:mk213ab2be08fef7a40a46410e4bb3f131841b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:24.853295   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key ...
	I1108 08:29:24.853311   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key: {Name:mk7dd5dc5a93a882dec5e46ef4c2967f6e5aad7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:24.853418   10713 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 08:29:25.096361   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt ...
	I1108 08:29:25.096394   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt: {Name:mk8cf02648c02d2efd08c9f82d81d1c0a3d615a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.096580   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key ...
	I1108 08:29:25.096596   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key: {Name:mk6a9bff750f1ffb58c096df91bd477b5cd6f4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.096695   10713 certs.go:257] generating profile certs ...
	I1108 08:29:25.096781   10713 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.key
	I1108 08:29:25.096800   10713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt with IP's: []
	I1108 08:29:25.515475   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt ...
	I1108 08:29:25.515509   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: {Name:mk9591853ee1a952a13591d356c4622190570821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.515681   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.key ...
	I1108 08:29:25.515693   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.key: {Name:mkcd506f9f128490c95a640fd4ed9a978dcc7b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.515762   10713 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f
	I1108 08:29:25.515779   10713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 08:29:25.663805   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f ...
	I1108 08:29:25.663838   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f: {Name:mk46995d4732edbc9dccbf302c071ac5e2e50a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.663997   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f ...
	I1108 08:29:25.664010   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f: {Name:mkb604a051c110e856b567bd8d8a60de60d4b1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.664111   10713 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt
	I1108 08:29:25.664196   10713 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key
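
	The four IP SANs in the apiserver cert line above are: 10.96.0.1 (first usable address of the 10.96.0.0/12 service CIDR), 127.0.0.1, 10.0.0.1 (an alternate service VIP minikube also includes), and the node IP 192.168.49.2; the .6ae8e95f suffix appears to encode that SAN set so the cert is regenerated when the IPs change. A minimal sketch, not minikube's crypto.go, of issuing such a CA-signed serving cert with Go's crypto/x509:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"log"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// Self-signed CA standing in for minikubeCA.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().AddDate(3, 0, 0),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		ca, err := x509.ParseCertificate(caDER)
		if err != nil {
			log.Fatal(err)
		}

		// Serving cert carrying the four IP SANs reported in the log.
		srvKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		srvTmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			IPAddresses: []net.IP{
				net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"), net.ParseIP("192.168.49.2"),
			},
		}
		if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, ca, &srvKey.PublicKey, caKey); err != nil {
			log.Fatal(err)
		}
		log.Println("apiserver serving cert issued with 4 IP SANs")
	}
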
	I1108 08:29:25.664259   10713 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key
	I1108 08:29:25.664276   10713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt with IP's: []
	I1108 08:29:25.771560   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt ...
	I1108 08:29:25.771591   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt: {Name:mk810bed9f024a88fb8db633e1bff5f363c3ec1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.771763   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key ...
	I1108 08:29:25.771776   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key: {Name:mkc12e8a3f3938a6071cf8c961543fa2701543e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.771958   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 08:29:25.771991   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 08:29:25.772014   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 08:29:25.772034   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 08:29:25.772553   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 08:29:25.790007   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 08:29:25.806857   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 08:29:25.823558   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 08:29:25.840696   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 08:29:25.857233   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 08:29:25.873610   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 08:29:25.890336   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 08:29:25.906966   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 08:29:25.925271   10713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 08:29:25.936989   10713 ssh_runner.go:195] Run: openssl version
	I1108 08:29:25.942736   10713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 08:29:25.953209   10713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:29:25.956667   10713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:29:25.956710   10713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:29:25.990039   10713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
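
	The openssl x509 -hash call and the b5213941.0 symlink above install minikubeCA into the system trust store: OpenSSL-style verifiers look up CAs in /etc/ssl/certs by files named <subject-hash>.0. A short sketch of what that trust relationship enables, checking that a leaf cert chains to the CA (paths are illustrative; on the node they would be /usr/share/ca-certificates/minikubeCA.pem and /var/lib/minikube/certs/apiserver.crt):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"log"
		"os"
	)

	func main() {
		caPEM, err := os.ReadFile("minikubeCA.pem")
		if err != nil {
			log.Fatal(err)
		}
		roots := x509.NewCertPool()
		if !roots.AppendCertsFromPEM(caPEM) {
			log.Fatal("no CA certificates parsed")
		}

		leafPEM, err := os.ReadFile("apiserver.crt")
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(leafPEM)
		if block == nil {
			log.Fatal("no PEM block in apiserver.crt")
		}
		leaf, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			log.Fatal(err)
		}
		if _, err := leaf.Verify(x509.VerifyOptions{Roots: roots}); err != nil {
			log.Fatalf("does not chain to minikubeCA: %v", err)
		}
		log.Println("apiserver.crt chains to minikubeCA")
	}
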
	I1108 08:29:25.998321   10713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 08:29:26.001850   10713 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 08:29:26.001906   10713 kubeadm.go:401] StartCluster: {Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:29:26.001978   10713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:29:26.002016   10713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:29:26.027925   10713 cri.go:89] found id: ""
	I1108 08:29:26.027993   10713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 08:29:26.035969   10713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 08:29:26.043819   10713 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 08:29:26.043880   10713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 08:29:26.052120   10713 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 08:29:26.052139   10713 kubeadm.go:158] found existing configuration files:
	
	I1108 08:29:26.052197   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 08:29:26.060096   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 08:29:26.060144   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 08:29:26.067495   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 08:29:26.074892   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 08:29:26.074948   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 08:29:26.081707   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 08:29:26.088534   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 08:29:26.088580   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 08:29:26.095205   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 08:29:26.102277   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 08:29:26.102338   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 08:29:26.109163   10713 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 08:29:26.162900   10713 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 08:29:26.216230   10713 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 08:29:36.392057   10713 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 08:29:36.392133   10713 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 08:29:36.392225   10713 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 08:29:36.392304   10713 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 08:29:36.392347   10713 kubeadm.go:319] OS: Linux
	I1108 08:29:36.392393   10713 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 08:29:36.392455   10713 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 08:29:36.392540   10713 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 08:29:36.392591   10713 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 08:29:36.392632   10713 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 08:29:36.392707   10713 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 08:29:36.392786   10713 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 08:29:36.392846   10713 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 08:29:36.392961   10713 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 08:29:36.393099   10713 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 08:29:36.393251   10713 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 08:29:36.393354   10713 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 08:29:36.395871   10713 out.go:252]   - Generating certificates and keys ...
	I1108 08:29:36.395950   10713 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 08:29:36.396028   10713 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 08:29:36.396114   10713 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 08:29:36.396174   10713 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 08:29:36.396234   10713 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 08:29:36.396326   10713 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 08:29:36.396402   10713 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 08:29:36.396568   10713 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-758852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 08:29:36.396648   10713 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 08:29:36.396770   10713 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-758852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 08:29:36.396855   10713 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 08:29:36.396956   10713 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 08:29:36.397014   10713 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 08:29:36.397075   10713 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 08:29:36.397124   10713 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 08:29:36.397179   10713 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 08:29:36.397226   10713 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 08:29:36.397303   10713 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 08:29:36.397375   10713 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 08:29:36.397462   10713 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 08:29:36.397538   10713 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 08:29:36.398863   10713 out.go:252]   - Booting up control plane ...
	I1108 08:29:36.398948   10713 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 08:29:36.399036   10713 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 08:29:36.399121   10713 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 08:29:36.399238   10713 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 08:29:36.399400   10713 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 08:29:36.399572   10713 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 08:29:36.399697   10713 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 08:29:36.399772   10713 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 08:29:36.399914   10713 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 08:29:36.400073   10713 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 08:29:36.400154   10713 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000882834s
	I1108 08:29:36.400237   10713 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 08:29:36.400332   10713 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 08:29:36.400444   10713 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 08:29:36.400533   10713 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 08:29:36.400634   10713 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.154599506s
	I1108 08:29:36.400746   10713 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.597597859s
	I1108 08:29:36.400833   10713 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501231029s
	I1108 08:29:36.400923   10713 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 08:29:36.401058   10713 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 08:29:36.401110   10713 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 08:29:36.401335   10713 kubeadm.go:319] [mark-control-plane] Marking the node addons-758852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 08:29:36.401410   10713 kubeadm.go:319] [bootstrap-token] Using token: hf8a7f.2k8dlzg3ck7lp7gu
	I1108 08:29:36.402891   10713 out.go:252]   - Configuring RBAC rules ...
	I1108 08:29:36.403005   10713 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 08:29:36.403121   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 08:29:36.403315   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 08:29:36.403441   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 08:29:36.403581   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 08:29:36.403692   10713 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 08:29:36.403822   10713 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 08:29:36.403885   10713 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 08:29:36.403956   10713 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 08:29:36.403965   10713 kubeadm.go:319] 
	I1108 08:29:36.404044   10713 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 08:29:36.404052   10713 kubeadm.go:319] 
	I1108 08:29:36.404133   10713 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 08:29:36.404146   10713 kubeadm.go:319] 
	I1108 08:29:36.404185   10713 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 08:29:36.404269   10713 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 08:29:36.404357   10713 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 08:29:36.404367   10713 kubeadm.go:319] 
	I1108 08:29:36.404414   10713 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 08:29:36.404420   10713 kubeadm.go:319] 
	I1108 08:29:36.404469   10713 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 08:29:36.404479   10713 kubeadm.go:319] 
	I1108 08:29:36.404553   10713 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 08:29:36.404658   10713 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 08:29:36.404751   10713 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 08:29:36.404765   10713 kubeadm.go:319] 
	I1108 08:29:36.404876   10713 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 08:29:36.404991   10713 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 08:29:36.405001   10713 kubeadm.go:319] 
	I1108 08:29:36.405126   10713 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hf8a7f.2k8dlzg3ck7lp7gu \
	I1108 08:29:36.405240   10713 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 08:29:36.405260   10713 kubeadm.go:319] 	--control-plane 
	I1108 08:29:36.405264   10713 kubeadm.go:319] 
	I1108 08:29:36.405385   10713 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 08:29:36.405398   10713 kubeadm.go:319] 
	I1108 08:29:36.405499   10713 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hf8a7f.2k8dlzg3ck7lp7gu \
	I1108 08:29:36.405633   10713 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
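
	The init run above was launched with --ignore-preflight-errors covering checks that are expected to fail inside a container driver (host ports already bound, swap enabled, CPU/memory floors, kernel SystemVerification). A hypothetical sketch of assembling such an invocation; only the flag names and a subset of the ignored checks come from the log:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		ignored := []string{
			"DirAvailable--etc-kubernetes-manifests",
			"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification",
		}
		cmd := exec.Command("kubeadm", "init",
			"--config", "/var/tmp/minikube/kubeadm.yaml",
			"--ignore-preflight-errors="+strings.Join(ignored, ","),
		)
		fmt.Println(cmd.String()) // printed only; actually running it needs root on the node
	}
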
	I1108 08:29:36.405646   10713 cni.go:84] Creating CNI manager for ""
	I1108 08:29:36.405655   10713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:29:36.407016   10713 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 08:29:36.408347   10713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 08:29:36.412850   10713 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 08:29:36.412867   10713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 08:29:36.425881   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 08:29:36.629915   10713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 08:29:36.629962   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:36.629966   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-758852 minikube.k8s.io/updated_at=2025_11_08T08_29_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=addons-758852 minikube.k8s.io/primary=true
	I1108 08:29:36.639961   10713 ops.go:34] apiserver oom_adj: -16
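
	The oom_adj check above reads the API server's legacy OOM adjustment from /proc/<pid>/oom_adj; a value of -16 tells the kernel's OOM killer to strongly prefer other victims. A Linux-only sketch of the same read (using the current process so the demo runs without a kube-apiserver present):

	package main

	import (
		"fmt"
		"log"
		"os"
		"strconv"
		"strings"
	)

	// oomAdj reads the legacy OOM adjustment for a pid; modern kernels also
	// expose the finer-grained oom_score_adj alongside it.
	func oomAdj(pid int) (int, error) {
		b, err := os.ReadFile(fmt.Sprintf("/proc/%d/oom_adj", pid))
		if err != nil {
			return 0, err
		}
		return strconv.Atoi(strings.TrimSpace(string(b)))
	}

	func main() {
		v, err := oomAdj(os.Getpid())
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println("oom_adj:", v) // the log shows -16 for kube-apiserver
	}
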
	I1108 08:29:36.713165   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:37.213821   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:37.714122   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:38.213643   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:38.713607   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:39.214223   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:39.713538   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:40.214195   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:40.714316   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:41.213363   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:41.285954   10713 kubeadm.go:1114] duration metric: took 4.656044772s to wait for elevateKubeSystemPrivileges
	I1108 08:29:41.285997   10713 kubeadm.go:403] duration metric: took 15.284091828s to StartCluster
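
	The burst of identical "get sa default" runs above, spaced roughly 500ms apart, is a poll loop: minikube waits for the default ServiceAccount to exist before declaring elevateKubeSystemPrivileges done (4.66s here). A stdlib sketch of that pattern; pollUntil and the fake probe are stand-ins, not minikube functions:

	package main

	import (
		"context"
		"errors"
		"fmt"
		"time"
	)

	// pollUntil retries probe at the given interval until it succeeds or ctx expires.
	func pollUntil(ctx context.Context, interval time.Duration, probe func() error) error {
		t := time.NewTicker(interval)
		defer t.Stop()
		for {
			if err := probe(); err == nil {
				return nil
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-t.C:
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Second)
		defer cancel()

		attempts := 0
		start := time.Now()
		err := pollUntil(ctx, 500*time.Millisecond, func() error {
			attempts++
			if attempts < 10 { // stand-in for "kubectl get sa default" still failing
				return errors.New(`serviceaccount "default" not found`)
			}
			return nil
		})
		if err != nil {
			fmt.Println("gave up:", err)
			return
		}
		fmt.Printf("default SA visible after %d attempts in %s\n",
			attempts, time.Since(start).Round(time.Millisecond))
	}
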
	I1108 08:29:41.286020   10713 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:41.286167   10713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:29:41.286745   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:41.286972   10713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 08:29:41.287005   10713 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 08:29:41.287075   10713 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1108 08:29:41.287195   10713 addons.go:70] Setting yakd=true in profile "addons-758852"
	I1108 08:29:41.287219   10713 addons.go:239] Setting addon yakd=true in "addons-758852"
	I1108 08:29:41.287229   10713 addons.go:70] Setting inspektor-gadget=true in profile "addons-758852"
	I1108 08:29:41.287253   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287259   10713 addons.go:239] Setting addon inspektor-gadget=true in "addons-758852"
	I1108 08:29:41.287264   10713 addons.go:70] Setting default-storageclass=true in profile "addons-758852"
	I1108 08:29:41.287290   10713 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:41.287310   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287326   10713 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-758852"
	I1108 08:29:41.287339   10713 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-758852"
	I1108 08:29:41.287339   10713 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-758852"
	I1108 08:29:41.287355   10713 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-758852"
	I1108 08:29:41.287358   10713 addons.go:70] Setting storage-provisioner=true in profile "addons-758852"
	I1108 08:29:41.287370   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287376   10713 addons.go:70] Setting registry-creds=true in profile "addons-758852"
	I1108 08:29:41.287389   10713 addons.go:239] Setting addon storage-provisioner=true in "addons-758852"
	I1108 08:29:41.287404   10713 addons.go:239] Setting addon registry-creds=true in "addons-758852"
	I1108 08:29:41.287417   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287430   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287716   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287857   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287869   10713 addons.go:70] Setting gcp-auth=true in profile "addons-758852"
	I1108 08:29:41.287872   10713 addons.go:70] Setting ingress=true in profile "addons-758852"
	I1108 08:29:41.287885   10713 addons.go:239] Setting addon ingress=true in "addons-758852"
	I1108 08:29:41.287887   10713 mustload.go:66] Loading cluster: addons-758852
	I1108 08:29:41.287891   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287909   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287960   10713 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-758852"
	I1108 08:29:41.287974   10713 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-758852"
	I1108 08:29:41.287996   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.288003   10713 addons.go:70] Setting registry=true in profile "addons-758852"
	I1108 08:29:41.288035   10713 addons.go:239] Setting addon registry=true in "addons-758852"
	I1108 08:29:41.288047   10713 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:41.288060   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.288262   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.288334   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.288459   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.288572   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287858   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.289252   10713 addons.go:70] Setting ingress-dns=true in profile "addons-758852"
	I1108 08:29:41.289271   10713 addons.go:239] Setting addon ingress-dns=true in "addons-758852"
	I1108 08:29:41.289315   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.289409   10713 addons.go:70] Setting volcano=true in profile "addons-758852"
	I1108 08:29:41.289421   10713 addons.go:239] Setting addon volcano=true in "addons-758852"
	I1108 08:29:41.289447   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.289611   10713 out.go:179] * Verifying Kubernetes components...
	I1108 08:29:41.289784   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.289811   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.290020   10713 addons.go:70] Setting cloud-spanner=true in profile "addons-758852"
	I1108 08:29:41.290043   10713 addons.go:239] Setting addon cloud-spanner=true in "addons-758852"
	I1108 08:29:41.290070   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.290220   10713 addons.go:70] Setting metrics-server=true in profile "addons-758852"
	I1108 08:29:41.290229   10713 addons.go:239] Setting addon metrics-server=true in "addons-758852"
	I1108 08:29:41.290243   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.290346   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.291118   10713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:29:41.291330   10713 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-758852"
	I1108 08:29:41.291478   10713 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-758852"
	I1108 08:29:41.291509   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.293076   10713 addons.go:70] Setting volumesnapshots=true in profile "addons-758852"
	I1108 08:29:41.293102   10713 addons.go:239] Setting addon volumesnapshots=true in "addons-758852"
	I1108 08:29:41.293126   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.293605   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.293713   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287311   10713 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-758852"
	I1108 08:29:41.287858   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.294661   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.302410   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.303308   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.353069   10713 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 08:29:41.353220   10713 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 08:29:41.354437   10713 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 08:29:41.354459   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 08:29:41.354521   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.354780   10713 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 08:29:41.354812   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 08:29:41.354866   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.364375   10713 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-758852"
	I1108 08:29:41.364433   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.365924   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.371193   10713 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 08:29:41.371901   10713 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 08:29:41.372029   10713 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 08:29:41.373430   10713 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 08:29:41.373450   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 08:29:41.373506   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.373734   10713 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 08:29:41.373746   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 08:29:41.373785   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.373957   10713 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 08:29:41.373970   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 08:29:41.374008   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	W1108 08:29:41.383725   10713 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 08:29:41.384116   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 08:29:41.384234   10713 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 08:29:41.384328   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1108 08:29:41.385519   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 08:29:41.385537   10713 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 08:29:41.385552   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.385776   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.387133   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 08:29:41.387153   10713 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 08:29:41.387202   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.387869   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:29:41.387936   10713 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 08:29:41.389503   10713 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 08:29:41.389549   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:29:41.390668   10713 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 08:29:41.390719   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 08:29:41.390771   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.390901   10713 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 08:29:41.390907   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 08:29:41.390939   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.395130   10713 addons.go:239] Setting addon default-storageclass=true in "addons-758852"
	I1108 08:29:41.395175   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.395649   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.400722   10713 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 08:29:41.401985   10713 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 08:29:41.402003   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 08:29:41.402202   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.402364   10713 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 08:29:41.404988   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 08:29:41.405007   10713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 08:29:41.405036   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 08:29:41.405232   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.412785   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 08:29:41.415749   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 08:29:41.417442   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 08:29:41.419580   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 08:29:41.421562   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 08:29:41.423033   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 08:29:41.424665   10713 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 08:29:41.426567   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 08:29:41.426770   10713 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 08:29:41.426918   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 08:29:41.427118   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.427813   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 08:29:41.427932   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 08:29:41.428928   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.445504   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.447611   10713 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 08:29:41.449127   10713 out.go:179]   - Using image docker.io/busybox:stable
	I1108 08:29:41.450031   10713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
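
	The sed pipeline above rewrites the CoreDNS Corefile in place: it inserts a hosts block mapping host.minikube.internal to the gateway IP 192.168.49.1 ahead of the forward plugin, and a log directive ahead of errors, then replaces the ConfigMap. A pure-string Go sketch of the same edit (the sample Corefile is illustrative; the real one lives in the coredns ConfigMap):

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		corefile := `.:53 {
	        errors
	        health
	        forward . /etc/resolv.conf
	        cache 30
	}`
		hostsBlock := `        hosts {
	           192.168.49.1 host.minikube.internal
	           fallthrough
	        }`
		var out []string
		for _, line := range strings.Split(corefile, "\n") {
			switch {
			case strings.HasPrefix(strings.TrimSpace(line), "forward ."):
				out = append(out, hostsBlock, line) // hosts block goes before forward
			case strings.TrimSpace(line) == "errors":
				out = append(out, "        log", line) // log directive goes before errors
			default:
				out = append(out, line)
			}
		}
		fmt.Println(strings.Join(out, "\n"))
	}
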
	I1108 08:29:41.451421   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.451710   10713 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 08:29:41.451838   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 08:29:41.451905   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.455725   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.455732   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.456120   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.459138   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.459648   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.477804   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.479353   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.495460   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.496370   10713 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 08:29:41.496387   10713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 08:29:41.496435   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.498486   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.498629   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.503749   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	W1108 08:29:41.506423   10713 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 08:29:41.506461   10713 retry.go:31] will retry after 144.57419ms: ssh: handshake failed: EOF
	I1108 08:29:41.515373   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.526656   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.527863   10713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 08:29:41.593933   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 08:29:41.620769   10713 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 08:29:41.620799   10713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 08:29:41.626645   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 08:29:41.628799   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 08:29:41.640071   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 08:29:41.640096   10713 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 08:29:41.643633   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 08:29:41.646606   10713 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 08:29:41.646733   10713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 08:29:41.654364   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 08:29:41.665030   10713 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 08:29:41.665123   10713 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 08:29:41.675641   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 08:29:41.675718   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 08:29:41.677855   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 08:29:41.696532   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 08:29:41.697158   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 08:29:41.698341   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 08:29:41.698361   10713 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 08:29:41.700023   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 08:29:41.712602   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 08:29:41.712765   10713 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 08:29:41.712901   10713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 08:29:41.715913   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 08:29:41.715938   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 08:29:41.740642   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 08:29:41.740673   10713 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 08:29:41.746531   10713 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 08:29:41.746552   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 08:29:41.764315   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 08:29:41.764341   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 08:29:41.775903   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 08:29:41.775926   10713 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 08:29:41.797483   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 08:29:41.799100   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 08:29:41.820665   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 08:29:41.822165   10713 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:29:41.822232   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 08:29:41.833039   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 08:29:41.833061   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 08:29:41.833885   10713 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
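
The "host record injected into CoreDNS's ConfigMap" line above is minikube making the host gateway (192.168.49.1) resolvable inside the cluster as host.minikube.internal. A minimal client-go sketch of that kind of injection, assuming the NodeHosts ConfigMap key served by CoreDNS's hosts plugin (a hypothetical simplification; the real start.go logic also handles the Corefile):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// InjectHostRecord appends a host entry to the coredns ConfigMap so CoreDNS
	// answers for host.minikube.internal. Illustrative, not minikube's exact code.
	func InjectHostRecord(ctx context.Context, cs kubernetes.Interface, ip string) error {
		cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		if cm.Data == nil {
			cm.Data = map[string]string{}
		}
		cm.Data["NodeHosts"] += "\n" + ip + " host.minikube.internal"
		_, err = cs.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{})
		return err
	}
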
	I1108 08:29:41.838499   10713 node_ready.go:35] waiting up to 6m0s for node "addons-758852" to be "Ready" ...
	I1108 08:29:41.865791   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 08:29:41.887711   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 08:29:41.887737   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 08:29:41.895716   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:29:41.898237   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 08:29:41.898332   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 08:29:41.951037   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 08:29:41.951126   10713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 08:29:41.966275   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 08:29:41.966317   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 08:29:41.995714   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 08:29:41.995740   10713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 08:29:42.013791   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 08:29:42.013831   10713 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 08:29:42.051398   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 08:29:42.078066   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 08:29:42.078099   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 08:29:42.131921   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 08:29:42.131944   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 08:29:42.177750   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 08:29:42.177859   10713 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 08:29:42.211080   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 08:29:42.350230   10713 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-758852" context rescaled to 1 replicas
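
The "rescaled to 1 replicas" line above trims the stock two-replica coredns deployment down for this single-node cluster. The equivalent rescale through the scale subresource looks like this (a sketch; kapi.go:214 wraps the same idea with retries and context plumbing):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// ScaleCoreDNS sets the coredns deployment's replica count via the scale
	// subresource, mirroring the "rescaled to 1 replicas" log line.
	func ScaleCoreDNS(ctx context.Context, cs kubernetes.Interface, replicas int32) error {
		scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
		if err != nil {
			return err
		}
		scale.Spec.Replicas = replicas
		_, err = cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{})
		return err
	}
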
	I1108 08:29:42.790938   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.162101916s)
	I1108 08:29:42.790983   10713 addons.go:480] Verifying addon ingress=true in "addons-758852"
	I1108 08:29:42.791059   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.147397849s)
	I1108 08:29:42.791213   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.136823558s)
	I1108 08:29:42.791343   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094779249s)
	I1108 08:29:42.791591   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.11340205s)
	I1108 08:29:42.791635   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.094448856s)
	I1108 08:29:42.791698   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.091628596s)
	I1108 08:29:42.791788   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.078960033s)
	I1108 08:29:42.791841   10713 addons.go:480] Verifying addon registry=true in "addons-758852"
	I1108 08:29:42.792559   10713 out.go:179] * Verifying ingress addon...
	I1108 08:29:42.793584   10713 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-758852 service yakd-dashboard -n yakd-dashboard
	
	I1108 08:29:42.793627   10713 out.go:179] * Verifying registry addon...
	I1108 08:29:42.795510   10713 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 08:29:42.796171   10713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1108 08:29:42.797342   10713 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
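
The 'default-storageclass' warning above is a routine optimistic-concurrency failure: something else updated the local-path StorageClass between minikube's read and its write, so the apiserver rejected the stale object with "the object has been modified". client-go ships a helper for exactly this read-modify-write conflict; a sketch of flipping the default-class annotation under it (the annotation key is the upstream standard, the function name is illustrative):

	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// MarkNonDefault clears the default-class annotation on a StorageClass,
	// re-reading and re-writing whenever the apiserver reports a conflict,
	// which is the failure mode in the warning above.
	func MarkNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err
		})
	}
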
	I1108 08:29:42.798821   10713 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 08:29:42.798840   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:42.798942   10713 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 08:29:42.798955   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
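
Every "kapi.go:96] waiting for pod ..." line from here to the end of this log is one tick of the same loop: list pods by label selector, check the phase, log the state, sleep, repeat until everything is Running or the timeout fires. A compilable sketch of that poll, with illustrative interval and timeout (kapi.go's internals differ):

	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// WaitForPodsRunning polls pods matching selector in ns until every one of
	// them reports phase Running, the predicate behind each "current state:
	// Pending" line in this log.
	func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}
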
	I1108 08:29:43.222215   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.326453594s)
	W1108 08:29:43.222270   10713 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 08:29:43.222315   10713 retry.go:31] will retry after 272.143188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
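
This is the classic CRD ordering race: a single kubectl apply batch creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass instance, and the apiserver has not yet established the new API group when the instance arrives, hence "ensure CRDs are installed first". minikube simply retries (retry.go:31 below, and the follow-up apply at 08:29:43.494 succeeds once the CRDs settle). The more surgical fix is to wait for the CRD's Established condition before applying its dependents; a hedged sketch:

	package sketch

	import (
		"context"
		"time"

		apiextv1 "k8s.io/apiextensions-apiserver/pkg/apis/apiextensions/v1"
		apiextclient "k8s.io/apiextensions-apiserver/pkg/client/clientset/clientset"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
	)

	// WaitForCRDEstablished blocks until the named CRD reports Established=True,
	// after which custom resources of that kind can be applied without the
	// "no matches for kind" failure seen above.
	func WaitForCRDEstablished(ctx context.Context, cs apiextclient.Interface, name string) error {
		return wait.PollUntilContextTimeout(ctx, 250*time.Millisecond, 30*time.Second, true,
			func(ctx context.Context) (bool, error) {
				crd, err := cs.ApiextensionsV1().CustomResourceDefinitions().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // CRD not visible yet; keep polling
				}
				for _, c := range crd.Status.Conditions {
					if c.Type == apiextv1.Established && c.Status == apiextv1.ConditionTrue {
						return true, nil
					}
				}
				return false, nil
			})
	}

For instance, WaitForCRDEstablished(ctx, cs, "volumesnapshotclasses.snapshot.storage.k8s.io") would have gated the snapshot-class apply in this run.
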
	I1108 08:29:43.222347   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.170900163s)
	I1108 08:29:43.222383   10713 addons.go:480] Verifying addon metrics-server=true in "addons-758852"
	I1108 08:29:43.222550   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.011410422s)
	I1108 08:29:43.222572   10713 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-758852"
	I1108 08:29:43.224419   10713 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 08:29:43.226599   10713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 08:29:43.228907   10713 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 08:29:43.228928   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:43.298710   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:43.299119   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:43.494897   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:29:43.729392   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:43.830311   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:43.830527   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:43.841640   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
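
The recurring node_ready.go:57 warnings are the other poll running in parallel with the addon waits: the node stays "Ready":"False" until kubelet's runtime and network checks pass. The predicate behind each warning is small (a sketch, assuming client-go):

	package sketch

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// NodeIsReady reports whether the named node's NodeReady condition is True,
	// the check behind the "will retry" warnings in this log.
	func NodeIsReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}
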
	I1108 08:29:44.229639   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:44.330819   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:44.331000   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:44.729630   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:44.798937   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:44.798994   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:45.230265   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:45.298663   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:45.298777   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:45.729492   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:45.829912   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:45.830001   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:45.971045   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.476108663s)
	I1108 08:29:46.229775   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:46.330913   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:46.331073   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:46.341311   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:46.729901   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:46.830941   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:46.831019   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:47.230019   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:47.298642   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:47.298699   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:47.730434   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:47.830944   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:47.831041   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:48.230348   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:48.298795   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:48.298890   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:48.730257   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:48.831184   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:48.831407   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:48.841537   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:48.993629   10713 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 08:29:48.993703   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:49.012500   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
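
The cli_runner/sshutil pair above shows how minikube reaches sshd inside the Docker-driver container: ask Docker which host port backs the container's 22/tcp (32768 here), then dial 127.0.0.1 on that port. The same lookup from Go, reusing the exact inspect template from the log (a sketch; the real cli_runner adds retries and timing):

	package sketch

	import (
		"os/exec"
		"strings"
	)

	// HostSSHPort asks Docker which host port is bound to the container's
	// 22/tcp, using the inspect template that appears in the log above.
	func HostSSHPort(container string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}
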
	I1108 08:29:49.118098   10713 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 08:29:49.130976   10713 addons.go:239] Setting addon gcp-auth=true in "addons-758852"
	I1108 08:29:49.131022   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:49.131502   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:49.149236   10713 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 08:29:49.149301   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:49.166615   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:49.229505   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:49.258101   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:29:49.259362   10713 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 08:29:49.260531   10713 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 08:29:49.260549   10713 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 08:29:49.273999   10713 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 08:29:49.274019   10713 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 08:29:49.286431   10713 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 08:29:49.286451   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 08:29:49.298317   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:49.298892   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:49.299519   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 08:29:49.594965   10713 addons.go:480] Verifying addon gcp-auth=true in "addons-758852"
	I1108 08:29:49.596156   10713 out.go:179] * Verifying gcp-auth addon...
	I1108 08:29:49.597984   10713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 08:29:49.600352   10713 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 08:29:49.600370   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:49.730112   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:49.799045   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:49.799086   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:50.101023   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:50.229682   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:50.298324   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:50.298960   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:50.601507   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:50.730728   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:50.798511   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:50.798945   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:51.101077   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:51.229650   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:51.298546   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:51.298977   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:29:51.341478   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:51.601647   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:51.730261   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:51.798789   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:51.798891   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:52.100862   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:52.229267   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:52.298888   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:52.299149   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:52.601026   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:52.729696   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:52.798351   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:52.799036   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:53.101454   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:53.229736   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:53.298447   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:53.299015   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:53.601183   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:53.729493   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:53.799087   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:53.799155   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:53.841410   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:54.100866   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:54.229667   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:54.298307   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:54.298882   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:54.601245   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:54.729968   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:54.798743   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:54.798814   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:55.100299   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:55.229761   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:55.298614   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:55.299015   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:55.600993   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:55.729511   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:55.799088   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:55.799195   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:55.841624   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:56.101233   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:56.230173   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:56.299018   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:56.299133   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:56.601033   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:56.729952   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:56.798491   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:56.798650   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:57.100428   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:57.229916   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:57.298323   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:57.298558   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:57.601540   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:57.730037   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:57.798803   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:57.798854   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:58.100563   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:58.230052   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:58.298618   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:58.298717   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:58.341753   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:58.601886   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:58.729836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:58.798634   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:58.798745   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:59.100539   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:59.230261   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:59.298846   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:59.300357   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:59.601119   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:59.729682   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:59.798314   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:59.798824   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:00.100954   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:00.229362   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:00.299022   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:00.299086   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:00.601378   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:00.730338   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:00.799038   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:00.799056   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:00.841495   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:01.101007   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:01.229620   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:01.298585   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:01.299240   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:01.601512   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:01.729943   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:01.798631   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:01.798791   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:02.100986   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:02.229510   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:02.298988   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:02.299037   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:02.600857   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:02.729829   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:02.798740   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:02.799036   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:02.841647   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:03.101348   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:03.229682   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:03.298549   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:03.298836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:03.601229   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:03.729955   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:03.798246   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:03.798400   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:04.101497   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:04.230232   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:04.298691   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:04.298772   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:04.601030   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:04.729937   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:04.798404   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:04.798453   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:04.841683   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:05.101258   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:05.229849   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:05.298488   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:05.298918   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:05.600926   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:05.729217   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:05.798690   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:05.798861   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:06.101102   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:06.229654   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:06.298335   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:06.298933   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:06.601421   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:06.730210   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:06.798676   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:06.798798   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:07.100635   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:07.228976   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:07.298340   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:07.298497   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:07.341641   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:07.601197   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:07.729634   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:07.798073   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:07.798700   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:08.100767   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:08.229108   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:08.298451   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:08.298650   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:08.600416   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:08.730075   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:08.798632   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:08.798773   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:09.100661   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:09.229461   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:09.299118   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:09.299238   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:09.341703   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:09.601401   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:09.729963   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:09.798800   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:09.798844   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:10.103966   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:10.229343   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:10.298844   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:10.298933   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:10.600851   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:10.729390   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:10.798901   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:10.798955   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:11.100647   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:11.229105   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:11.298689   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:11.298899   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:11.600998   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:11.729861   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:11.798366   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:11.799143   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:11.841380   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:12.101073   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:12.229625   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:12.299085   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:12.299128   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:12.601333   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:12.729770   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:12.798836   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:12.799095   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:13.101014   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:13.229396   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:13.298757   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:13.298889   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:13.600712   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:13.729100   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:13.798792   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:13.798959   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:14.101249   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:14.230097   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:14.298783   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:14.298846   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:14.341142   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:14.601228   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:14.729652   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:14.798257   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:14.799050   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:15.101172   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:15.229737   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:15.298683   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:15.299098   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:15.601098   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:15.729774   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:15.798427   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:15.798948   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:16.100926   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:16.229268   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:16.298792   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:16.298932   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:16.341397   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:16.600936   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:16.729499   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:16.799171   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:16.799412   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:17.101186   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:17.229718   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:17.298454   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:17.299134   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:17.601180   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:17.729891   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:17.798344   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:17.798500   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:18.101471   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:18.230036   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:18.298538   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:18.298672   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:18.600803   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:18.729319   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:18.798580   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:18.798591   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:18.841850   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:19.100982   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:19.229679   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:19.298272   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:19.298884   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:19.601056   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:19.729744   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:19.798234   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:19.798954   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:20.101312   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:20.229939   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:20.298190   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:20.298360   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:20.601082   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:20.729869   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:20.798455   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:20.798609   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:21.101341   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:21.229690   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:21.298465   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:21.299018   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:21.341548   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:21.601897   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:21.729587   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:21.799254   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:21.799259   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:22.101255   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:22.229938   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:22.301924   10713 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 08:30:22.301952   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:22.303065   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:22.341407   10713 node_ready.go:49] node "addons-758852" is "Ready"
	I1108 08:30:22.341440   10713 node_ready.go:38] duration metric: took 40.502908626s for node "addons-758852" to be "Ready" ...
	I1108 08:30:22.341457   10713 api_server.go:52] waiting for apiserver process to appear ...
	I1108 08:30:22.341511   10713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:30:22.358201   10713 api_server.go:72] duration metric: took 41.0711634s to wait for apiserver process to appear ...
	I1108 08:30:22.358229   10713 api_server.go:88] waiting for apiserver healthz status ...
	I1108 08:30:22.358247   10713 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 08:30:22.362520   10713 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 08:30:22.363313   10713 api_server.go:141] control plane version: v1.34.1
	I1108 08:30:22.363336   10713 api_server.go:131] duration metric: took 5.101345ms to wait for apiserver health ...
	I1108 08:30:22.363344   10713 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 08:30:22.367935   10713 system_pods.go:59] 20 kube-system pods found
	I1108 08:30:22.367963   10713 system_pods.go:61] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending
	I1108 08:30:22.367968   10713 system_pods.go:61] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Pending
	I1108 08:30:22.367971   10713 system_pods.go:61] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending
	I1108 08:30:22.367979   10713 system_pods.go:61] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:22.367984   10713 system_pods.go:61] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending
	I1108 08:30:22.367990   10713 system_pods.go:61] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:22.367994   10713 system_pods.go:61] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:22.367997   10713 system_pods.go:61] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:22.368000   10713 system_pods.go:61] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:22.368005   10713 system_pods.go:61] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:22.368009   10713 system_pods.go:61] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:22.368013   10713 system_pods.go:61] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:22.368019   10713 system_pods.go:61] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:22.368026   10713 system_pods.go:61] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending
	I1108 08:30:22.368031   10713 system_pods.go:61] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:22.368036   10713 system_pods.go:61] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:22.368041   10713 system_pods.go:61] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending
	I1108 08:30:22.368044   10713 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending
	I1108 08:30:22.368048   10713 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending
	I1108 08:30:22.368053   10713 system_pods.go:61] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:22.368062   10713 system_pods.go:74] duration metric: took 4.712963ms to wait for pod list to return data ...
	I1108 08:30:22.368070   10713 default_sa.go:34] waiting for default service account to be created ...
	I1108 08:30:22.369892   10713 default_sa.go:45] found service account: "default"
	I1108 08:30:22.369912   10713 default_sa.go:55] duration metric: took 1.833838ms for default service account to be created ...
	I1108 08:30:22.369920   10713 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 08:30:22.372587   10713 system_pods.go:86] 20 kube-system pods found
	I1108 08:30:22.372612   10713 system_pods.go:89] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending
	I1108 08:30:22.372616   10713 system_pods.go:89] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Pending
	I1108 08:30:22.372620   10713 system_pods.go:89] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending
	I1108 08:30:22.372626   10713 system_pods.go:89] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:22.372630   10713 system_pods.go:89] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending
	I1108 08:30:22.372675   10713 system_pods.go:89] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:22.372680   10713 system_pods.go:89] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:22.372686   10713 system_pods.go:89] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:22.372690   10713 system_pods.go:89] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:22.372695   10713 system_pods.go:89] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:22.372702   10713 system_pods.go:89] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:22.372707   10713 system_pods.go:89] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:22.372714   10713 system_pods.go:89] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:22.372721   10713 system_pods.go:89] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:22.372731   10713 system_pods.go:89] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:22.372736   10713 system_pods.go:89] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:22.372742   10713 system_pods.go:89] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending
	I1108 08:30:22.372748   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending
	I1108 08:30:22.372753   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending
	I1108 08:30:22.372757   10713 system_pods.go:89] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:22.372772   10713 retry.go:31] will retry after 294.577442ms: missing components: kube-dns
	I1108 08:30:22.601389   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:22.703638   10713 system_pods.go:86] 20 kube-system pods found
	I1108 08:30:22.703676   10713 system_pods.go:89] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 08:30:22.703688   10713 system_pods.go:89] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:22.703698   10713 system_pods.go:89] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 08:30:22.703706   10713 system_pods.go:89] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:22.703715   10713 system_pods.go:89] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 08:30:22.703721   10713 system_pods.go:89] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:22.703729   10713 system_pods.go:89] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:22.703735   10713 system_pods.go:89] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:22.703744   10713 system_pods.go:89] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:22.703758   10713 system_pods.go:89] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:22.703766   10713 system_pods.go:89] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:22.703773   10713 system_pods.go:89] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:22.703784   10713 system_pods.go:89] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:22.703797   10713 system_pods.go:89] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:22.703811   10713 system_pods.go:89] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:22.703823   10713 system_pods.go:89] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:22.703836   10713 system_pods.go:89] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 08:30:22.703848   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:22.703861   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:22.703872   10713 system_pods.go:89] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:22.703895   10713 retry.go:31] will retry after 317.889685ms: missing components: kube-dns
	I1108 08:30:22.741765   10713 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 08:30:22.741794   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:22.802464   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:22.802736   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:23.027025   10713 system_pods.go:86] 20 kube-system pods found
	I1108 08:30:23.027066   10713 system_pods.go:89] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 08:30:23.027076   10713 system_pods.go:89] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:23.027089   10713 system_pods.go:89] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 08:30:23.027097   10713 system_pods.go:89] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:23.027106   10713 system_pods.go:89] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 08:30:23.027122   10713 system_pods.go:89] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:23.027128   10713 system_pods.go:89] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:23.027138   10713 system_pods.go:89] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:23.027144   10713 system_pods.go:89] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:23.027152   10713 system_pods.go:89] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:23.027162   10713 system_pods.go:89] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:23.027168   10713 system_pods.go:89] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:23.027180   10713 system_pods.go:89] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:23.027188   10713 system_pods.go:89] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:23.027200   10713 system_pods.go:89] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:23.027209   10713 system_pods.go:89] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:23.027218   10713 system_pods.go:89] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 08:30:23.027226   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:23.027237   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:23.027243   10713 system_pods.go:89] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Running
	I1108 08:30:23.027256   10713 system_pods.go:126] duration metric: took 657.330258ms to wait for k8s-apps to be running ...
	I1108 08:30:23.027265   10713 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 08:30:23.027316   10713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:30:23.044042   10713 system_svc.go:56] duration metric: took 16.769177ms WaitForService to wait for kubelet
	I1108 08:30:23.044074   10713 kubeadm.go:587] duration metric: took 41.757039106s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 08:30:23.044094   10713 node_conditions.go:102] verifying NodePressure condition ...
	I1108 08:30:23.046731   10713 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 08:30:23.046764   10713 node_conditions.go:123] node cpu capacity is 8
	I1108 08:30:23.046781   10713 node_conditions.go:105] duration metric: took 2.678855ms to run NodePressure ...
	I1108 08:30:23.046792   10713 start.go:242] waiting for startup goroutines ...
	I1108 08:30:23.101647   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:23.230733   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:23.298743   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:23.299213   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:23.600995   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:23.730250   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:23.830972   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:23.831082   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:24.101333   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:24.230137   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:24.298739   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:24.298796   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:24.601896   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:24.730163   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:24.830726   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:24.830746   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:25.101576   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:25.230839   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:25.298816   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:25.299410   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:25.600981   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:25.730166   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:25.799328   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:25.799449   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:26.101797   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:26.229984   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:26.299028   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:26.299149   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:26.600836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:26.729870   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:26.798804   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:26.799077   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:27.102047   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:27.230810   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:27.300249   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:27.302650   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:27.602968   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:27.730972   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:27.799356   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:27.799396   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:28.101169   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:28.230019   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:28.298760   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:28.298837   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:28.601917   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:28.730176   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:28.799582   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:28.799612   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:29.100644   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:29.229734   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:29.298493   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:29.298877   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:29.601059   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:29.729846   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:29.798724   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:29.799087   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:30.101828   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:30.230070   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:30.298595   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:30.298796   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:30.601614   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:30.731037   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:30.799322   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:30.799335   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:31.101104   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:31.229841   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:31.298663   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:31.299258   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:31.602030   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:31.730443   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:31.798836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:31.798904   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:32.101790   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:32.230272   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:32.299521   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:32.299668   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:32.601803   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:32.730178   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:32.798852   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:32.801372   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:33.101589   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:33.230665   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:33.299154   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:33.299440   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.601013   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:33.730152   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:33.799139   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.799199   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.101153   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:34.230704   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:34.365150   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.365345   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:34.601035   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:34.730223   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:34.798521   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.799044   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:35.101620   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:35.230014   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:35.300126   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:35.300569   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.601112   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:35.729748   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:35.800353   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.800422   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.102074   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:36.229864   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:36.299040   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:36.299117   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.600965   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:36.729894   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:36.798875   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.799023   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:37.100834   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:37.229707   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:37.299292   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:37.299429   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.601545   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:37.729857   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:37.798878   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.799729   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.102742   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:38.230804   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:38.298376   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:38.298973   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.601081   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:38.730400   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:38.800154   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:38.800364   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.112218   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:39.231213   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:39.298993   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.299041   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:39.600927   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:39.730533   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:39.799696   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.799888   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:40.101797   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:40.230088   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:40.298945   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.299110   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:40.602122   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:40.730623   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:40.799332   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.799373   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.100873   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:41.229998   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:41.298478   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:41.298520   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.601447   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:41.730856   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:41.798735   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.799156   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.210752   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:42.229220   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:42.298846   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.298955   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:42.600754   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:42.729598   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:42.799196   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.799228   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:43.102061   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:43.230017   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:43.298881   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:43.299102   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:43.601738   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:43.729774   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:43.831115   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:43.831165   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:44.101314   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:44.230422   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:44.299293   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:44.299515   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:44.601624   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:44.730370   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:44.798995   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:44.799053   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.100619   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:45.230325   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:45.298955   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:45.299031   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.604652   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:45.730858   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:45.798600   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.799263   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:46.101813   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:46.229955   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:46.298516   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:46.298887   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:46.603599   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:46.731385   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:46.799846   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:46.799961   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:47.102734   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:47.230750   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:47.299407   10713 kapi.go:107] duration metric: took 1m4.503232429s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 08:30:47.299514   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:47.601573   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:47.730705   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:47.799800   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:48.101844   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:48.230097   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:48.298833   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:48.601823   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:48.729938   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:48.798790   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:49.102356   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:49.229878   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:49.298430   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:49.601213   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:49.730502   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:49.799326   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:50.100770   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:50.229947   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:50.298502   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:50.601655   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:50.731619   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:50.834482   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:51.102786   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:51.231234   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:51.299176   10713 kapi.go:107] duration metric: took 1m8.503662851s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 08:30:51.649312   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:51.730405   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:52.101809   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:52.229575   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:52.601218   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:52.730571   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:53.101737   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:53.229731   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:53.601251   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:53.730667   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:54.102018   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:54.230244   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:54.602184   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:54.730651   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:55.101086   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:55.230401   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:55.600628   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:55.729238   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:56.101722   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:56.230071   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:56.601204   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:56.730396   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:57.169811   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:57.272523   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:57.601608   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:57.731063   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:58.101459   10713 kapi.go:107] duration metric: took 1m8.50347504s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 08:30:58.103241   10713 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-758852 cluster.
	I1108 08:30:58.104479   10713 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 08:30:58.105670   10713 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1108 08:30:58.230453   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:58.729231   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:59.230736   10713 kapi.go:107] duration metric: took 1m16.004136907s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 08:30:59.232401   10713 out.go:179] * Enabled addons: amd-gpu-device-plugin, inspektor-gadget, ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, registry-creds, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1108 08:30:59.233533   10713 addons.go:515] duration metric: took 1m17.946466312s for enable addons: enabled=[amd-gpu-device-plugin inspektor-gadget ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner registry-creds yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1108 08:30:59.233573   10713 start.go:247] waiting for cluster config update ...
	I1108 08:30:59.233601   10713 start.go:256] writing updated cluster config ...
	I1108 08:30:59.233863   10713 ssh_runner.go:195] Run: rm -f paused
	I1108 08:30:59.237745   10713 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 08:30:59.240646   10713 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6cwbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.244537   10713 pod_ready.go:94] pod "coredns-66bc5c9577-6cwbz" is "Ready"
	I1108 08:30:59.244559   10713 pod_ready.go:86] duration metric: took 3.893202ms for pod "coredns-66bc5c9577-6cwbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.246209   10713 pod_ready.go:83] waiting for pod "etcd-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.249376   10713 pod_ready.go:94] pod "etcd-addons-758852" is "Ready"
	I1108 08:30:59.249397   10713 pod_ready.go:86] duration metric: took 3.169952ms for pod "etcd-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.251010   10713 pod_ready.go:83] waiting for pod "kube-apiserver-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.254134   10713 pod_ready.go:94] pod "kube-apiserver-addons-758852" is "Ready"
	I1108 08:30:59.254150   10713 pod_ready.go:86] duration metric: took 3.119361ms for pod "kube-apiserver-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.255802   10713 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.641318   10713 pod_ready.go:94] pod "kube-controller-manager-addons-758852" is "Ready"
	I1108 08:30:59.641357   10713 pod_ready.go:86] duration metric: took 385.535714ms for pod "kube-controller-manager-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.840706   10713 pod_ready.go:83] waiting for pod "kube-proxy-fkvsn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.241029   10713 pod_ready.go:94] pod "kube-proxy-fkvsn" is "Ready"
	I1108 08:31:00.241056   10713 pod_ready.go:86] duration metric: took 400.324804ms for pod "kube-proxy-fkvsn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.441893   10713 pod_ready.go:83] waiting for pod "kube-scheduler-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.841234   10713 pod_ready.go:94] pod "kube-scheduler-addons-758852" is "Ready"
	I1108 08:31:00.841263   10713 pod_ready.go:86] duration metric: took 399.34376ms for pod "kube-scheduler-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.841276   10713 pod_ready.go:40] duration metric: took 1.603501971s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 08:31:00.884824   10713 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 08:31:00.886921   10713 out.go:179] * Done! kubectl is now configured to use "addons-758852" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.894129711Z" level=info msg="Running pod sandbox: default/hello-world-app-5d498dc89-6gqmw/POD" id=3c3dfae6-ce29-43eb-9488-592dcfcb3bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.894233637Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.901667881Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6gqmw Namespace:default ID:af186e1af8182f5fc807e08cf8050ef83c433280f027411cf7238fc24be82c38 UID:997ea146-4e64-43f9-a1b2-998baa7b390c NetNS:/var/run/netns/e3c52ae0-8729-4e41-835c-1676c221339d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002992d0}] Aliases:map[]}"
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.901708946Z" level=info msg="Adding pod default_hello-world-app-5d498dc89-6gqmw to CNI network \"kindnet\" (type=ptp)"
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.913705647Z" level=info msg="Got pod network &{Name:hello-world-app-5d498dc89-6gqmw Namespace:default ID:af186e1af8182f5fc807e08cf8050ef83c433280f027411cf7238fc24be82c38 UID:997ea146-4e64-43f9-a1b2-998baa7b390c NetNS:/var/run/netns/e3c52ae0-8729-4e41-835c-1676c221339d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0002992d0}] Aliases:map[]}"
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.913844212Z" level=info msg="Checking pod default_hello-world-app-5d498dc89-6gqmw for CNI network kindnet (type=ptp)"
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.91471963Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.915670469Z" level=info msg="Ran pod sandbox af186e1af8182f5fc807e08cf8050ef83c433280f027411cf7238fc24be82c38 with infra container: default/hello-world-app-5d498dc89-6gqmw/POD" id=3c3dfae6-ce29-43eb-9488-592dcfcb3bb5 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.916832408Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=eda1d213-0f9d-423f-aeb6-4eb3088720e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.916942134Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=eda1d213-0f9d-423f-aeb6-4eb3088720e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.916976291Z" level=info msg="Neither image nor artfiact docker.io/kicbase/echo-server:1.0 found" id=eda1d213-0f9d-423f-aeb6-4eb3088720e2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.917641107Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=137dbb64-2775-46ec-ba44-7900e9c1f27b name=/runtime.v1.ImageService/PullImage
	Nov 08 08:33:37 addons-758852 crio[773]: time="2025-11-08T08:33:37.922571931Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.266205729Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86" id=137dbb64-2775-46ec-ba44-7900e9c1f27b name=/runtime.v1.ImageService/PullImage
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.266782735Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=d8872406-36e2-478d-82ac-3647f9cc31db name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.268361882Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=cc90c6ea-386f-4f41-a99a-0a402001fbc0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.271809658Z" level=info msg="Creating container: default/hello-world-app-5d498dc89-6gqmw/hello-world-app" id=69a32792-a743-4c4e-be61-fa4121a9787c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.271938658Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.278411417Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.278630958Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/2b381de4ae81489d7bed8a289550297e757a6101b80bdf73fa5742222a0d0f0b/merged/etc/passwd: no such file or directory"
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.278666283Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/2b381de4ae81489d7bed8a289550297e757a6101b80bdf73fa5742222a0d0f0b/merged/etc/group: no such file or directory"
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.278949042Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.315756872Z" level=info msg="Created container 2429f61cf6b10ed351af41c0246ecc25b82b51dcf7126a25d902c920530153ae: default/hello-world-app-5d498dc89-6gqmw/hello-world-app" id=69a32792-a743-4c4e-be61-fa4121a9787c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.316493929Z" level=info msg="Starting container: 2429f61cf6b10ed351af41c0246ecc25b82b51dcf7126a25d902c920530153ae" id=edd5395c-729b-4ddb-bd35-fc28c0f89c66 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 08:33:38 addons-758852 crio[773]: time="2025-11-08T08:33:38.31843507Z" level=info msg="Started container" PID=9616 containerID=2429f61cf6b10ed351af41c0246ecc25b82b51dcf7126a25d902c920530153ae description=default/hello-world-app-5d498dc89-6gqmw/hello-world-app id=edd5395c-729b-4ddb-bd35-fc28c0f89c66 name=/runtime.v1.RuntimeService/StartContainer sandboxID=af186e1af8182f5fc807e08cf8050ef83c433280f027411cf7238fc24be82c38
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED                  STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	2429f61cf6b10       docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86                                        Less than a second ago   Running             hello-world-app                          0                   af186e1af8182       hello-world-app-5d498dc89-6gqmw             default
	f34813a9d0591       docker.io/upmcenterprises/registry-creds@sha256:93a633d4f2b76a1c66bf19c664dbddc56093a543de6d54320f19f585ccd7d605                             About a minute ago       Running             registry-creds                           0                   7b93562b9fcfb       registry-creds-764b6fb674-rjbxd             kube-system
	bc6559957fa10       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                                              2 minutes ago            Running             nginx                                    0                   55ed556ab9ac3       nginx                                       default
	512aa15697e55       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          2 minutes ago            Running             busybox                                  0                   de20eb6ae573a       busybox                                     default
	f34be8782c294       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          2 minutes ago            Running             csi-snapshotter                          0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	ad24fc3016e0b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 2 minutes ago            Running             gcp-auth                                 0                   1c457cc7e176a       gcp-auth-78565c9fb4-99tsv                   gcp-auth
	66198912dbb4c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          2 minutes ago            Running             csi-provisioner                          0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	ef0ec581e5d71       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            2 minutes ago            Running             liveness-probe                           0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	83841cdc49661       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           2 minutes ago            Running             hostpath                                 0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	5204be461b8fb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            2 minutes ago            Running             gadget                                   0                   c3eb4b38ebd8c       gadget-jb2ln                                gadget
	f340f0145eb9b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                2 minutes ago            Running             node-driver-registrar                    0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	b0fd0c2b4b9a8       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             2 minutes ago            Running             controller                               0                   6a0888bf2f3a7       ingress-nginx-controller-675c5ddd98-qd9l6   ingress-nginx
	10f4c3a3e2558       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              2 minutes ago            Running             registry-proxy                           0                   d1aec8411ff96       registry-proxy-j697c                        kube-system
	db7058dc33833       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   2 minutes ago            Running             csi-external-health-monitor-controller   0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	1aaad9983441a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     2 minutes ago            Running             amd-gpu-device-plugin                    0                   1ad6a7975c94b       amd-gpu-device-plugin-fgsj6                 kube-system
	8aabc952ff686       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   8645eeba25397       snapshot-controller-7d9fbc56b8-vkhw9        kube-system
	07cf5a2c38f59       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              2 minutes ago            Running             csi-resizer                              0                   abbc1bc3c0751       csi-hostpath-resizer-0                      kube-system
	db1083da29dce       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      2 minutes ago            Running             volume-snapshot-controller               0                   b4566a4fb863c       snapshot-controller-7d9fbc56b8-8dlk9        kube-system
	b2bfae5b5011c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             2 minutes ago            Running             csi-attacher                             0                   ecfa30768cd66       csi-hostpath-attacher-0                     kube-system
	88464ad8c8a6f       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     2 minutes ago            Running             nvidia-device-plugin-ctr                 0                   635e2616172ae       nvidia-device-plugin-daemonset-tzbp6        kube-system
	b19a1caa72578       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              patch                                    0                   e9df9fb0dde82       ingress-nginx-admission-patch-49bbt         ingress-nginx
	b58739220d7fd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   3 minutes ago            Exited              create                                   0                   c4fb5dd85b31a       ingress-nginx-admission-create-t2bkq        ingress-nginx
	6df5c42a3809d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           3 minutes ago            Running             registry                                 0                   d1d34eec0eb2e       registry-6b586f9694-8mkgh                   kube-system
	9ede2f18a3c3e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              3 minutes ago            Running             yakd                                     0                   293c8645f4c7a       yakd-dashboard-5ff678cb9-v2brq              yakd-dashboard
	f542c8d2df432       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             3 minutes ago            Running             local-path-provisioner                   0                   a96b103013a08       local-path-provisioner-648f6765c9-6h2gs     local-path-storage
	f8285831ae530       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               3 minutes ago            Running             minikube-ingress-dns                     0                   7f0d658d7cce5       kube-ingress-dns-minikube                   kube-system
	c5530737e9c19       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               3 minutes ago            Running             cloud-spanner-emulator                   0                   d06fae3dfc55f       cloud-spanner-emulator-6f9fcf858b-j98cr     default
	af0574068f104       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        3 minutes ago            Running             metrics-server                           0                   8159f2ec93496       metrics-server-85b7d694d7-g65zk             kube-system
	a616ef6928972       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             3 minutes ago            Running             coredns                                  0                   3faffebd8a2fe       coredns-66bc5c9577-6cwbz                    kube-system
	76b41f4794cf9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             3 minutes ago            Running             storage-provisioner                      0                   883e216febd78       storage-provisioner                         kube-system
	10b7c804477d9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             3 minutes ago            Running             kindnet-cni                              0                   4b3609e12c475       kindnet-6qtgf                               kube-system
	f2b09aff0e553       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             3 minutes ago            Running             kube-proxy                               0                   d63870490ba4a       kube-proxy-fkvsn                            kube-system
	e08d383ff6705       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             4 minutes ago            Running             kube-apiserver                           0                   a68527097af7c       kube-apiserver-addons-758852                kube-system
	8e136e1e55dba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             4 minutes ago            Running             kube-controller-manager                  0                   5bae20e97ddbc       kube-controller-manager-addons-758852       kube-system
	ee1613ab5f8f0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             4 minutes ago            Running             etcd                                     0                   f8fad627cf760       etcd-addons-758852                          kube-system
	61e01b287696c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             4 minutes ago            Running             kube-scheduler                           0                   0e512812e48d1       kube-scheduler-addons-758852                kube-system
	
	
	==> coredns [a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c] <==
	[INFO] 10.244.0.22:33291 - 10676 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00705789s
	[INFO] 10.244.0.22:43067 - 5658 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004801683s
	[INFO] 10.244.0.22:59546 - 63091 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006093941s
	[INFO] 10.244.0.22:36763 - 42697 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004409635s
	[INFO] 10.244.0.22:38420 - 57096 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004921924s
	[INFO] 10.244.0.22:45256 - 31406 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001965851s
	[INFO] 10.244.0.22:50115 - 36738 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002108482s
	[INFO] 10.244.0.27:47708 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000329815s
	[INFO] 10.244.0.27:37567 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000209384s
	[INFO] 10.244.0.31:53227 - 55641 "AAAA IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.00023616s
	[INFO] 10.244.0.31:50255 - 35321 "A IN accounts.google.com.kube-system.svc.cluster.local. udp 67 false 512" NXDOMAIN qr,aa,rd 160 0.000299103s
	[INFO] 10.244.0.31:51600 - 31881 "A IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.00011501s
	[INFO] 10.244.0.31:48185 - 36 "AAAA IN accounts.google.com.svc.cluster.local. udp 55 false 512" NXDOMAIN qr,aa,rd 148 0.000142839s
	[INFO] 10.244.0.31:53916 - 9541 "A IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000105645s
	[INFO] 10.244.0.31:60060 - 31277 "AAAA IN accounts.google.com.cluster.local. udp 51 false 512" NXDOMAIN qr,aa,rd 144 0.000146069s
	[INFO] 10.244.0.31:48512 - 64424 "AAAA IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.002902955s
	[INFO] 10.244.0.31:45168 - 34065 "A IN accounts.google.com.local. udp 43 false 512" NXDOMAIN qr,rd,ra 43 0.004553139s
	[INFO] 10.244.0.31:48808 - 21521 "AAAA IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.004810083s
	[INFO] 10.244.0.31:43005 - 23234 "A IN accounts.google.com.us-east4-a.c.k8s-minikube.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 179 0.005187817s
	[INFO] 10.244.0.31:55619 - 62000 "AAAA IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.003519174s
	[INFO] 10.244.0.31:38510 - 31033 "A IN accounts.google.com.c.k8s-minikube.internal. udp 61 false 512" NXDOMAIN qr,rd,ra 166 0.005531703s
	[INFO] 10.244.0.31:53367 - 13441 "AAAA IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004190688s
	[INFO] 10.244.0.31:33191 - 11675 "A IN accounts.google.com.google.internal. udp 53 false 512" NXDOMAIN qr,rd,ra 158 0.004329031s
	[INFO] 10.244.0.31:42749 - 18037 "AAAA IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 84 0.001898016s
	[INFO] 10.244.0.31:56543 - 27874 "A IN accounts.google.com. udp 37 false 512" NOERROR qr,rd,ra 72 0.002025446s
	
	
	==> describe nodes <==
	Name:               addons-758852
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-758852
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=addons-758852
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T08_29_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-758852
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-758852"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 08:29:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-758852
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 08:33:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 08:33:20 +0000   Sat, 08 Nov 2025 08:29:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 08:33:20 +0000   Sat, 08 Nov 2025 08:29:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 08:33:20 +0000   Sat, 08 Nov 2025 08:29:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 08:33:20 +0000   Sat, 08 Nov 2025 08:30:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-758852
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7d4bd929-f477-47c1-b3ca-97cfa03ee98a
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (29 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m38s
	  default                     cloud-spanner-emulator-6f9fcf858b-j98cr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  default                     hello-world-app-5d498dc89-6gqmw              0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m24s
	  gadget                      gadget-jb2ln                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  gcp-auth                    gcp-auth-78565c9fb4-99tsv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m50s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-qd9l6    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         3m57s
	  kube-system                 amd-gpu-device-plugin-fgsj6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 coredns-66bc5c9577-6cwbz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     3m58s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 csi-hostpathplugin-rtgg7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 etcd-addons-758852                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         4m4s
	  kube-system                 kindnet-6qtgf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      3m59s
	  kube-system                 kube-apiserver-addons-758852                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-controller-manager-addons-758852        200m (2%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-proxy-fkvsn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-scheduler-addons-758852                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         4m4s
	  kube-system                 metrics-server-85b7d694d7-g65zk              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         3m57s
	  kube-system                 nvidia-device-plugin-daemonset-tzbp6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 registry-6b586f9694-8mkgh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 registry-creds-764b6fb674-rjbxd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 registry-proxy-j697c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m17s
	  kube-system                 snapshot-controller-7d9fbc56b8-8dlk9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 snapshot-controller-7d9fbc56b8-vkhw9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m56s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  local-path-storage          local-path-provisioner-648f6765c9-6h2gs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v2brq               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 3m57s  kube-proxy       
	  Normal  Starting                 4m4s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m4s   kubelet          Node addons-758852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m4s   kubelet          Node addons-758852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m4s   kubelet          Node addons-758852 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m59s  node-controller  Node addons-758852 event: Registered Node addons-758852 in Controller
	  Normal  NodeReady                3m17s  kubelet          Node addons-758852 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.084884] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.205659] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 8 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.054730] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +2.047820] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +4.031573] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +8.127109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[Nov 8 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	
	
	==> etcd [ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268] <==
	{"level":"warn","ts":"2025-11-08T08:29:32.622206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:32.627891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:32.633799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:32.683842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:43.715183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:43.721538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:10.078212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:10.084785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:10.105172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:34.363836Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.90119ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:34.363954Z","caller":"traceutil/trace.go:172","msg":"trace[984066840] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:990; }","duration":"103.044205ms","start":"2025-11-08T08:30:34.260896Z","end":"2025-11-08T08:30:34.363940Z","steps":["trace[984066840] 'range keys from in-memory index tree'  (duration: 102.863043ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:39.151960Z","caller":"traceutil/trace.go:172","msg":"trace[1019913915] transaction","detail":"{read_only:false; response_revision:1026; number_of_response:1; }","duration":"116.50412ms","start":"2025-11-08T08:30:39.035428Z","end":"2025-11-08T08:30:39.151932Z","steps":["trace[1019913915] 'process raft request'  (duration: 116.394617ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:30:42.208263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.851125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:42.208354Z","caller":"traceutil/trace.go:172","msg":"trace[1773635745] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1054; }","duration":"225.957942ms","start":"2025-11-08T08:30:41.982382Z","end":"2025-11-08T08:30:42.208340Z","steps":["trace[1773635745] 'agreement among raft nodes before linearized reading'  (duration: 92.228267ms)","trace[1773635745] 'range keys from in-memory index tree'  (duration: 133.599745ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:30:42.208982Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.778548ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041175706319561 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/snapshot-controller\" mod_revision:700 > success:<request_put:<key:\"/registry/deployments/kube-system/snapshot-controller\" value_size:3313 >> failure:<request_range:<key:\"/registry/deployments/kube-system/snapshot-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T08:30:42.209037Z","caller":"traceutil/trace.go:172","msg":"trace[1497930883] linearizableReadLoop","detail":"{readStateIndex:1082; appliedIndex:1081; }","duration":"134.434586ms","start":"2025-11-08T08:30:42.074593Z","end":"2025-11-08T08:30:42.209027Z","steps":["trace[1497930883] 'read index received'  (duration: 29.495µs)","trace[1497930883] 'applied index is now lower than readState.Index'  (duration: 134.404184ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T08:30:42.209068Z","caller":"traceutil/trace.go:172","msg":"trace[623641400] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"244.601489ms","start":"2025-11-08T08:30:41.964445Z","end":"2025-11-08T08:30:42.209047Z","steps":["trace[623641400] 'process raft request'  (duration: 110.159929ms)","trace[623641400] 'compare'  (duration: 133.644022ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:30:42.209116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.512151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-08T08:30:42.209122Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.809522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:42.209155Z","caller":"traceutil/trace.go:172","msg":"trace[1929175426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1055; }","duration":"176.845827ms","start":"2025-11-08T08:30:42.032301Z","end":"2025-11-08T08:30:42.209147Z","steps":["trace[1929175426] 'agreement among raft nodes before linearized reading'  (duration: 176.781607ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:30:42.209172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.358891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:42.209135Z","caller":"traceutil/trace.go:172","msg":"trace[1943306984] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1055; }","duration":"184.534166ms","start":"2025-11-08T08:30:42.024595Z","end":"2025-11-08T08:30:42.209129Z","steps":["trace[1943306984] 'agreement among raft nodes before linearized reading'  (duration: 184.490792ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:42.209188Z","caller":"traceutil/trace.go:172","msg":"trace[611731499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"109.375055ms","start":"2025-11-08T08:30:42.099808Z","end":"2025-11-08T08:30:42.209183Z","steps":["trace[611731499] 'agreement among raft nodes before linearized reading'  (duration: 109.345185ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:57.168413Z","caller":"traceutil/trace.go:172","msg":"trace[2029798354] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"108.020124ms","start":"2025-11-08T08:30:57.060376Z","end":"2025-11-08T08:30:57.168396Z","steps":["trace[2029798354] 'process raft request'  (duration: 106.252492ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:57.169598Z","caller":"traceutil/trace.go:172","msg":"trace[1210832985] transaction","detail":"{read_only:false; response_revision:1178; number_of_response:1; }","duration":"105.74415ms","start":"2025-11-08T08:30:57.063841Z","end":"2025-11-08T08:30:57.169585Z","steps":["trace[1210832985] 'process raft request'  (duration: 105.667943ms)"],"step_count":1}
	
	
	==> gcp-auth [ad24fc3016e0b3eb6344f22c175b7d28a097ad9fda49713783997fcc2a9fba3f] <==
	2025/11/08 08:30:57 GCP Auth Webhook started!
	2025/11/08 08:31:01 Ready to marshal response ...
	2025/11/08 08:31:01 Ready to write response ...
	2025/11/08 08:31:01 Ready to marshal response ...
	2025/11/08 08:31:01 Ready to write response ...
	2025/11/08 08:31:01 Ready to marshal response ...
	2025/11/08 08:31:01 Ready to write response ...
	2025/11/08 08:31:12 Ready to marshal response ...
	2025/11/08 08:31:12 Ready to write response ...
	2025/11/08 08:31:12 Ready to marshal response ...
	2025/11/08 08:31:12 Ready to write response ...
	2025/11/08 08:31:15 Ready to marshal response ...
	2025/11/08 08:31:15 Ready to write response ...
	2025/11/08 08:31:20 Ready to marshal response ...
	2025/11/08 08:31:20 Ready to write response ...
	2025/11/08 08:31:20 Ready to marshal response ...
	2025/11/08 08:31:20 Ready to write response ...
	2025/11/08 08:31:26 Ready to marshal response ...
	2025/11/08 08:31:26 Ready to write response ...
	2025/11/08 08:31:58 Ready to marshal response ...
	2025/11/08 08:31:58 Ready to write response ...
	2025/11/08 08:33:37 Ready to marshal response ...
	2025/11/08 08:33:37 Ready to write response ...
	
	
	==> kernel <==
	 08:33:39 up 16 min,  0 user,  load average: 0.25, 0.40, 0.20
	Linux addons-758852 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3] <==
	I1108 08:31:31.715001       1 main.go:301] handling current node
	I1108 08:31:41.714400       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:31:41.714431       1 main.go:301] handling current node
	I1108 08:31:51.721570       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:31:51.721608       1 main.go:301] handling current node
	I1108 08:32:01.714952       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:32:01.715005       1 main.go:301] handling current node
	I1108 08:32:11.715993       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:32:11.716071       1 main.go:301] handling current node
	I1108 08:32:21.714505       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:32:21.714544       1 main.go:301] handling current node
	I1108 08:32:31.715967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:32:31.715998       1 main.go:301] handling current node
	I1108 08:32:41.716355       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:32:41.716386       1 main.go:301] handling current node
	I1108 08:32:51.717329       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:32:51.717365       1 main.go:301] handling current node
	I1108 08:33:01.715242       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:33:01.715305       1 main.go:301] handling current node
	I1108 08:33:11.714837       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:33:11.714876       1 main.go:301] handling current node
	I1108 08:33:21.714502       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:33:21.714538       1 main.go:301] handling current node
	I1108 08:33:31.716360       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:33:31.716395       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34] <==
	W1108 08:30:22.247382       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.247427       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:22.247604       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.247637       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:22.265597       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.265709       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:22.272340       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.272379       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:25.778639       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 08:30:25.778698       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 08:30:25.779264       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.780455       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.785749       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.806546       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.848173       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	I1108 08:30:25.960345       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 08:31:09.547356       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38272: use of closed network connection
	E1108 08:31:09.692666       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38306: use of closed network connection
	I1108 08:31:15.485746       1 controller.go:667] quota admission added evaluator for: ingresses.networking.k8s.io
	I1108 08:31:15.672561       1 alloc.go:328] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.100.52.47"}
	I1108 08:31:35.873407       1 controller.go:667] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1108 08:33:37.656216       1 alloc.go:328] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.55.1"}
	
	
	==> kube-controller-manager [8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6] <==
	I1108 08:29:40.063352       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 08:29:40.063464       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 08:29:40.063716       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 08:29:40.063745       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 08:29:40.063769       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 08:29:40.064063       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 08:29:40.064526       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 08:29:40.066002       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 08:29:40.066046       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 08:29:40.066087       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 08:29:40.068236       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:29:40.069317       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 08:29:40.069366       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:29:40.071638       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 08:29:40.076884       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 08:29:40.082142       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 08:29:42.658752       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1108 08:30:10.072951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 08:30:10.073096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 08:30:10.073148       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 08:30:10.089440       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 08:30:10.093242       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 08:30:10.174139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:30:10.193700       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 08:30:25.070269       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968] <==
	I1108 08:29:41.276526       1 server_linux.go:53] "Using iptables proxy"
	I1108 08:29:41.445177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 08:29:41.546235       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 08:29:41.548844       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 08:29:41.549947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 08:29:41.594700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 08:29:41.594831       1 server_linux.go:132] "Using iptables Proxier"
	I1108 08:29:41.602471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 08:29:41.609462       1 server.go:527] "Version info" version="v1.34.1"
	I1108 08:29:41.609901       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 08:29:41.611908       1 config.go:200] "Starting service config controller"
	I1108 08:29:41.611971       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 08:29:41.612016       1 config.go:106] "Starting endpoint slice config controller"
	I1108 08:29:41.612043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 08:29:41.612075       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 08:29:41.612101       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 08:29:41.612771       1 config.go:309] "Starting node config controller"
	I1108 08:29:41.612825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 08:29:41.714457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 08:29:41.714514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 08:29:41.716328       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 08:29:41.718343       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792] <==
	E1108 08:29:33.076751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 08:29:33.076806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:29:33.076886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:29:33.076911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 08:29:33.076909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 08:29:33.076930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 08:29:33.076963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 08:29:33.077010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 08:29:33.077054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 08:29:33.077069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:29:33.077088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 08:29:33.077166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 08:29:33.899459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 08:29:33.908624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 08:29:33.914488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 08:29:33.952824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:29:33.973186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:29:33.994389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 08:29:34.111036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 08:29:34.188517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:29:34.234589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 08:29:34.268672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 08:29:34.286175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 08:29:34.316274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1108 08:29:34.573262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.873191    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"task-pv-storage\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^63a9a226-bc7d-11f0-bd08-eebe11ecff1e\") pod \"e6c6eb4d-809c-4d70-a972-dab9c595aea3\" (UID: \"e6c6eb4d-809c-4d70-a972-dab9c595aea3\") "
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.873251    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e6c6eb4d-809c-4d70-a972-dab9c595aea3-gcp-creds\") pod \"e6c6eb4d-809c-4d70-a972-dab9c595aea3\" (UID: \"e6c6eb4d-809c-4d70-a972-dab9c595aea3\") "
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.873420    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e6c6eb4d-809c-4d70-a972-dab9c595aea3-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "e6c6eb4d-809c-4d70-a972-dab9c595aea3" (UID: "e6c6eb4d-809c-4d70-a972-dab9c595aea3"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.873518    1286 reconciler_common.go:299] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/e6c6eb4d-809c-4d70-a972-dab9c595aea3-gcp-creds\") on node \"addons-758852\" DevicePath \"\""
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.875598    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e6c6eb4d-809c-4d70-a972-dab9c595aea3-kube-api-access-jw6b9" (OuterVolumeSpecName: "kube-api-access-jw6b9") pod "e6c6eb4d-809c-4d70-a972-dab9c595aea3" (UID: "e6c6eb4d-809c-4d70-a972-dab9c595aea3"). InnerVolumeSpecName "kube-api-access-jw6b9". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.876160    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/hostpath.csi.k8s.io^63a9a226-bc7d-11f0-bd08-eebe11ecff1e" (OuterVolumeSpecName: "task-pv-storage") pod "e6c6eb4d-809c-4d70-a972-dab9c595aea3" (UID: "e6c6eb4d-809c-4d70-a972-dab9c595aea3"). InnerVolumeSpecName "pvc-d394f5f0-de66-4a47-bb37-bcc807961431". PluginName "kubernetes.io/csi", VolumeGIDValue ""
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.974618    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jw6b9\" (UniqueName: \"kubernetes.io/projected/e6c6eb4d-809c-4d70-a972-dab9c595aea3-kube-api-access-jw6b9\") on node \"addons-758852\" DevicePath \"\""
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.974675    1286 reconciler_common.go:292] "operationExecutor.UnmountDevice started for volume \"pvc-d394f5f0-de66-4a47-bb37-bcc807961431\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^63a9a226-bc7d-11f0-bd08-eebe11ecff1e\") on node \"addons-758852\" "
	Nov 08 08:32:04 addons-758852 kubelet[1286]: I1108 08:32:04.978801    1286 operation_generator.go:895] UnmountDevice succeeded for volume "pvc-d394f5f0-de66-4a47-bb37-bcc807961431" (UniqueName: "kubernetes.io/csi/hostpath.csi.k8s.io^63a9a226-bc7d-11f0-bd08-eebe11ecff1e") on node "addons-758852"
	Nov 08 08:32:05 addons-758852 kubelet[1286]: I1108 08:32:05.075171    1286 reconciler_common.go:299] "Volume detached for volume \"pvc-d394f5f0-de66-4a47-bb37-bcc807961431\" (UniqueName: \"kubernetes.io/csi/hostpath.csi.k8s.io^63a9a226-bc7d-11f0-bd08-eebe11ecff1e\") on node \"addons-758852\" DevicePath \"\""
	Nov 08 08:32:05 addons-758852 kubelet[1286]: I1108 08:32:05.212839    1286 scope.go:117] "RemoveContainer" containerID="e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7"
	Nov 08 08:32:05 addons-758852 kubelet[1286]: I1108 08:32:05.222867    1286 scope.go:117] "RemoveContainer" containerID="e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7"
	Nov 08 08:32:05 addons-758852 kubelet[1286]: E1108 08:32:05.223275    1286 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7\": container with ID starting with e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7 not found: ID does not exist" containerID="e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7"
	Nov 08 08:32:05 addons-758852 kubelet[1286]: I1108 08:32:05.223332    1286 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7"} err="failed to get container status \"e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7\": rpc error: code = NotFound desc = could not find container \"e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7\": container with ID starting with e4cfdbfd27fb9c015cf11f37ad192d1648d519ba13569ae5f32f858f963339a7 not found: ID does not exist"
	Nov 08 08:32:05 addons-758852 kubelet[1286]: I1108 08:32:05.611316    1286 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e6c6eb4d-809c-4d70-a972-dab9c595aea3" path="/var/lib/kubelet/pods/e6c6eb4d-809c-4d70-a972-dab9c595aea3/volumes"
	Nov 08 08:32:06 addons-758852 kubelet[1286]: I1108 08:32:06.608419    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j697c" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:32:06 addons-758852 kubelet[1286]: I1108 08:32:06.608571    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-fgsj6" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:32:25 addons-758852 kubelet[1286]: E1108 08:32:25.263053    1286 pod_workers.go:1324] "Error syncing pod, skipping" err="unmounted volumes=[gcr-creds], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="kube-system/registry-creds-764b6fb674-rjbxd" podUID="6574dc0f-978b-434f-99a1-1452a69af882"
	Nov 08 08:32:38 addons-758852 kubelet[1286]: I1108 08:32:38.346099    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-creds-764b6fb674-rjbxd" podStartSLOduration=175.609351542 podStartE2EDuration="2m56.346081377s" podCreationTimestamp="2025-11-08 08:29:42 +0000 UTC" firstStartedPulling="2025-11-08 08:32:36.632852934 +0000 UTC m=+181.103090359" lastFinishedPulling="2025-11-08 08:32:37.369582773 +0000 UTC m=+181.839820194" observedRunningTime="2025-11-08 08:32:38.344941234 +0000 UTC m=+182.815178676" watchObservedRunningTime="2025-11-08 08:32:38.346081377 +0000 UTC m=+182.816318820"
	Nov 08 08:33:08 addons-758852 kubelet[1286]: I1108 08:33:08.608420    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-fgsj6" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:33:10 addons-758852 kubelet[1286]: I1108 08:33:10.608318    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-tzbp6" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:33:26 addons-758852 kubelet[1286]: I1108 08:33:26.607763    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j697c" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:33:37 addons-758852 kubelet[1286]: I1108 08:33:37.647130    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhnjg\" (UniqueName: \"kubernetes.io/projected/997ea146-4e64-43f9-a1b2-998baa7b390c-kube-api-access-fhnjg\") pod \"hello-world-app-5d498dc89-6gqmw\" (UID: \"997ea146-4e64-43f9-a1b2-998baa7b390c\") " pod="default/hello-world-app-5d498dc89-6gqmw"
	Nov 08 08:33:37 addons-758852 kubelet[1286]: I1108 08:33:37.647195    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/997ea146-4e64-43f9-a1b2-998baa7b390c-gcp-creds\") pod \"hello-world-app-5d498dc89-6gqmw\" (UID: \"997ea146-4e64-43f9-a1b2-998baa7b390c\") " pod="default/hello-world-app-5d498dc89-6gqmw"
	Nov 08 08:33:38 addons-758852 kubelet[1286]: I1108 08:33:38.557734    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-5d498dc89-6gqmw" podStartSLOduration=1.207300748 podStartE2EDuration="1.55771542s" podCreationTimestamp="2025-11-08 08:33:37 +0000 UTC" firstStartedPulling="2025-11-08 08:33:37.91729811 +0000 UTC m=+242.387535543" lastFinishedPulling="2025-11-08 08:33:38.267712779 +0000 UTC m=+242.737950215" observedRunningTime="2025-11-08 08:33:38.557062104 +0000 UTC m=+243.027299545" watchObservedRunningTime="2025-11-08 08:33:38.55771542 +0000 UTC m=+243.027952862"
	
	
	==> storage-provisioner [76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f] <==
	W1108 08:33:13.374980       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:15.377874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:15.382902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:17.386083       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:17.389667       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:19.392382       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:19.396081       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:21.398407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:21.403006       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:23.405623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:23.409125       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:25.412030       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:25.416651       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:27.419171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:27.423757       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:29.426733       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:29.431775       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:31.434362       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:31.437822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:33.440627       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:33.444954       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:35.447769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:35.451318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:37.453893       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:33:37.457596       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
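
Nothing in the captured component logs above points at a cluster-level fault: the kube-apiserver's gcp-auth webhook errors are explicitly "failing open" while the gcp-auth service was still coming up, the v1beta1.metrics.k8s.io 503/connection-refused errors stop once the APIService is added to the ResourceManager at 08:30:25, and the storage-provisioner warnings only note the v1 Endpoints deprecation. A hedged sketch of how one might confirm this against the same profile (object names beyond the APIService are assumptions, not taken from this report):

	# Is the metrics APIService now Available?
	kubectl --context addons-758852 get apiservice v1beta1.metrics.k8s.io
	# Was a gcp-auth mutating webhook registered? (exact object name assumed)
	kubectl --context addons-758852 get mutatingwebhookconfigurations
	# The replacement API the provisioner warning recommends:
	kubectl --context addons-758852 -n kube-system get endpointslices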
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-758852 -n addons-758852
helpers_test.go:269: (dbg) Run:  kubectl --context addons-758852 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-758852 describe pod ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-758852 describe pod ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt: exit status 1 (55.230091ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-t2bkq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-49bbt" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-758852 describe pod ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable ingress-dns --alsologtostderr -v=1: exit status 11 (232.165958ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:33:39.978691   25094 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:33:39.978832   25094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:33:39.978841   25094 out.go:374] Setting ErrFile to fd 2...
	I1108 08:33:39.978845   25094 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:33:39.979062   25094 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:33:39.979325   25094 mustload.go:66] Loading cluster: addons-758852
	I1108 08:33:39.979631   25094 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:33:39.979643   25094 addons.go:607] checking whether the cluster is paused
	I1108 08:33:39.979718   25094 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:33:39.979728   25094 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:33:39.980087   25094 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:33:39.998016   25094 ssh_runner.go:195] Run: systemctl --version
	I1108 08:33:39.998070   25094 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:33:40.016245   25094 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:33:40.108015   25094 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:33:40.108102   25094 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:33:40.135815   25094 cri.go:89] found id: "f34813a9d05912364b1ef93d67d6ae77f69268b60d0dd0c4b733943ae4331364"
	I1108 08:33:40.135843   25094 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:33:40.135848   25094 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:33:40.135853   25094 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:33:40.135855   25094 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:33:40.135865   25094 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:33:40.135868   25094 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:33:40.135871   25094 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:33:40.135873   25094 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:33:40.135882   25094 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:33:40.135884   25094 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:33:40.135887   25094 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:33:40.135890   25094 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:33:40.135893   25094 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:33:40.135895   25094 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:33:40.135905   25094 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:33:40.135913   25094 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:33:40.135916   25094 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:33:40.135919   25094 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:33:40.135921   25094 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:33:40.135926   25094 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:33:40.135928   25094 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:33:40.135931   25094 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:33:40.135933   25094 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:33:40.135935   25094 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:33:40.135938   25094 cri.go:89] found id: ""
	I1108 08:33:40.135986   25094 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:33:40.150156   25094 out.go:203] 
	W1108 08:33:40.151389   25094 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:33:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:33:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:33:40.151411   25094 out.go:285] * 
	* 
	W1108 08:33:40.154496   25094 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_4116e8848b7c0e6a40fa9061a5ca6da2e0eb6ead_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:33:40.155657   25094 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress-dns addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable ingress-dns --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable ingress --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable ingress --alsologtostderr -v=1: exit status 11 (234.724636ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:33:40.212906   25157 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:33:40.213210   25157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:33:40.213219   25157 out.go:374] Setting ErrFile to fd 2...
	I1108 08:33:40.213224   25157 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:33:40.213470   25157 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:33:40.213763   25157 mustload.go:66] Loading cluster: addons-758852
	I1108 08:33:40.214153   25157 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:33:40.214170   25157 addons.go:607] checking whether the cluster is paused
	I1108 08:33:40.214268   25157 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:33:40.214297   25157 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:33:40.214719   25157 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:33:40.234213   25157 ssh_runner.go:195] Run: systemctl --version
	I1108 08:33:40.234289   25157 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:33:40.251161   25157 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:33:40.344017   25157 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:33:40.344105   25157 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:33:40.371124   25157 cri.go:89] found id: "f34813a9d05912364b1ef93d67d6ae77f69268b60d0dd0c4b733943ae4331364"
	I1108 08:33:40.371145   25157 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:33:40.371149   25157 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:33:40.371152   25157 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:33:40.371154   25157 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:33:40.371159   25157 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:33:40.371162   25157 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:33:40.371166   25157 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:33:40.371170   25157 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:33:40.371177   25157 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:33:40.371181   25157 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:33:40.371185   25157 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:33:40.371190   25157 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:33:40.371194   25157 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:33:40.371198   25157 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:33:40.371208   25157 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:33:40.371216   25157 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:33:40.371222   25157 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:33:40.371227   25157 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:33:40.371231   25157 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:33:40.371235   25157 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:33:40.371238   25157 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:33:40.371240   25157 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:33:40.371243   25157 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:33:40.371245   25157 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:33:40.371247   25157 cri.go:89] found id: ""
	I1108 08:33:40.371306   25157 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:33:40.384764   25157 out.go:203] 
	W1108 08:33:40.386119   25157 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:33:40Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:33:40Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:33:40.386141   25157 out.go:285] * 
	* 
	W1108 08:33:40.389258   25157 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_62553deefc570c97f2052ef703df7b8905a654d6_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:33:40.390535   25157 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable ingress addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable ingress --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Ingress (145.15s)
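
Every "addons disable" invocation above fails at the same step: before disabling an addon, minikube checks whether the cluster is paused by listing CRI containers (which succeeds) and then running "sudo runc list -f json" on the node, which exits 1 with "open /run/runc: no such file or directory". A minimal diagnostic sketch, assuming SSH access to the profile's node; the guess that this crio cluster's OCI runtime keeps its state somewhere other than /run/runc (for example crun under /run/crun) is an assumption, not something this report confirms:

	# CRI-level listing works in the captured output above:
	minikube -p addons-758852 ssh -- sudo crictl ps -a --quiet
	# The exact command the paused-check runs; fails here with exit status 1:
	minikube -p addons-758852 ssh -- sudo runc list -f json
	# Check which OCI runtime state directory actually exists (second path hypothetical):
	minikube -p addons-758852 ssh -- ls -d /run/runc /run/crun

If /run/runc is indeed absent while the kube-system containers are demonstrably running, the paused check is querying a runtime root that never exists on this image, which would account for the identical exit 11 across the otherwise healthy addon tests in this run.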

TestAddons/parallel/InspektorGadget (5.25s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:352: "gadget-jb2ln" [be6dd107-b565-4e1f-839a-37aeb30fe153] Running
addons_test.go:823: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.003263091s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable inspektor-gadget --alsologtostderr -v=1: exit status 11 (249.024912ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:25.506616   21671 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:25.506778   21671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:25.506788   21671 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:25.506793   21671 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:25.506989   21671 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:25.507261   21671 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:25.507654   21671 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:25.507671   21671 addons.go:607] checking whether the cluster is paused
	I1108 08:31:25.507766   21671 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:25.507782   21671 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:25.508145   21671 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:25.526307   21671 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:25.526352   21671 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:25.544228   21671 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:25.638665   21671 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:25.638769   21671 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:25.671916   21671 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:25.671937   21671 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:25.671940   21671 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:25.671943   21671 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:25.671946   21671 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:25.671949   21671 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:25.671952   21671 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:25.671954   21671 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:25.671956   21671 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:25.671960   21671 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:25.671963   21671 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:25.671965   21671 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:25.671968   21671 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:25.671970   21671 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:25.671973   21671 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:25.671993   21671 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:25.672001   21671 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:25.672004   21671 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:25.672006   21671 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:25.672009   21671 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:25.672014   21671 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:25.672016   21671 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:25.672019   21671 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:25.672021   21671 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:25.672023   21671 cri.go:89] found id: ""
	I1108 08:31:25.672064   21671 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:25.689242   21671 out.go:203] 
	W1108 08:31:25.690692   21671 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:25Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:25.690710   21671 out.go:285] * 
	* 
	W1108 08:31:25.693724   21671 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:25.695553   21671 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable inspektor-gadget --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/InspektorGadget (5.25s)
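
Every addon enable/disable failure in this report shares one root cause. Before toggling an addon, minikube checks whether the cluster is paused: it lists kube-system containers with crictl, then shells out to "sudo runc list -f json". On this crio node /run/runc does not exist, so the paused-check itself exits 1 and the command aborts with MK_ADDON_DISABLE_PAUSED (or MK_ADDON_ENABLE_PAUSED on the enable path). A minimal by-hand replay over SSH, assuming "minikube ssh" works against this profile; the crictl invocation is copied from the trace above, and the trailing ls probe is illustrative only:

    # replay of minikube's paused-check on the node
    minikube -p addons-758852 ssh -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system
    minikube -p addons-758852 ssh -- sudo runc list -f json   # fails: open /run/runc: no such file or directory
    minikube -p addons-758852 ssh -- ls /run/runc             # illustrative probe: the directory is absent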

TestAddons/parallel/MetricsServer (5.32s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:455: metrics-server stabilized in 3.212083ms
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:352: "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Running
addons_test.go:457: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.003070426s
addons_test.go:463: (dbg) Run:  kubectl --context addons-758852 top pods -n kube-system
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable metrics-server --alsologtostderr -v=1: exit status 11 (249.618663ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:15.066607   19889 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:15.066783   19889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:15.066797   19889 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:15.066803   19889 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:15.067515   19889 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:15.067802   19889 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:15.068127   19889 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:15.068142   19889 addons.go:607] checking whether the cluster is paused
	I1108 08:31:15.068223   19889 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:15.068234   19889 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:15.068635   19889 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:15.086072   19889 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:15.086124   19889 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:15.104650   19889 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:15.198122   19889 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:15.198222   19889 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:15.225433   19889 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:15.225452   19889 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:15.225457   19889 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:15.225460   19889 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:15.225463   19889 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:15.225467   19889 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:15.225472   19889 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:15.225476   19889 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:15.225479   19889 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:15.225486   19889 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:15.225490   19889 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:15.225494   19889 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:15.225499   19889 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:15.225508   19889 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:15.225512   19889 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:15.225519   19889 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:15.225521   19889 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:15.225525   19889 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:15.225527   19889 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:15.225537   19889 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:15.225542   19889 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:15.225544   19889 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:15.225547   19889 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:15.225549   19889 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:15.225551   19889 cri.go:89] found id: ""
	I1108 08:31:15.225585   19889 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:15.239247   19889 out.go:203] 
	W1108 08:31:15.240481   19889 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:15.240498   19889 out.go:285] * 
	W1108 08:31:15.243723   19889 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_9e377edc2b59264359e9c26f81b048e390fa608a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:15.245233   19889 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable metrics-server addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable metrics-server --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/MetricsServer (5.32s)
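
The functional half of this test passed: the metrics-server pod was healthy within ~5s and the kubectl top call completed with no recorded error; only the follow-up addon disable aborted on the runc paused-check described above. The health check that succeeded, copied from the run and safe to replay by hand:

    kubectl --context addons-758852 top pods -n kube-system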

TestAddons/parallel/CSI (45.52s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1108 08:31:20.518463    9369 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1108 08:31:20.521680    9369 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1108 08:31:20.521706    9369 kapi.go:107] duration metric: took 3.257097ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:549: csi-hostpath-driver pods stabilized in 3.266762ms
addons_test.go:552: (dbg) Run:  kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:562: (dbg) Run:  kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:352: "task-pv-pod" [887cfb53-dcd6-45ce-8d8e-7db734926b88] Pending
helpers_test.go:352: "task-pv-pod" [887cfb53-dcd6-45ce-8d8e-7db734926b88] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:352: "task-pv-pod" [887cfb53-dcd6-45ce-8d8e-7db734926b88] Running
addons_test.go:567: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.003828557s
addons_test.go:572: (dbg) Run:  kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:577: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:427: (dbg) Run:  kubectl --context addons-758852 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: (dbg) Run:  kubectl --context addons-758852 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:582: (dbg) Run:  kubectl --context addons-758852 delete pod task-pv-pod
addons_test.go:582: (dbg) Done: kubectl --context addons-758852 delete pod task-pv-pod: (1.189020979s)
addons_test.go:588: (dbg) Run:  kubectl --context addons-758852 delete pvc hpvc
addons_test.go:594: (dbg) Run:  kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:604: (dbg) Run:  kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:609: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:352: "task-pv-pod-restore" [e6c6eb4d-809c-4d70-a972-dab9c595aea3] Pending
helpers_test.go:352: "task-pv-pod-restore" [e6c6eb4d-809c-4d70-a972-dab9c595aea3] Running
addons_test.go:609: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 6.003436737s
addons_test.go:614: (dbg) Run:  kubectl --context addons-758852 delete pod task-pv-pod-restore
addons_test.go:618: (dbg) Run:  kubectl --context addons-758852 delete pvc hpvc-restore
addons_test.go:622: (dbg) Run:  kubectl --context addons-758852 delete volumesnapshot new-snapshot-demo
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable volumesnapshots --alsologtostderr -v=1: exit status 11 (239.112067ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:32:05.607341   22960 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:32:05.607633   22960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:32:05.607643   22960 out.go:374] Setting ErrFile to fd 2...
	I1108 08:32:05.607650   22960 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:32:05.607834   22960 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:32:05.608155   22960 mustload.go:66] Loading cluster: addons-758852
	I1108 08:32:05.608618   22960 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:32:05.608642   22960 addons.go:607] checking whether the cluster is paused
	I1108 08:32:05.608773   22960 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:32:05.608789   22960 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:32:05.609197   22960 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:32:05.627810   22960 ssh_runner.go:195] Run: systemctl --version
	I1108 08:32:05.627857   22960 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:32:05.645759   22960 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:32:05.739831   22960 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:32:05.739925   22960 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:32:05.769340   22960 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:32:05.769365   22960 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:32:05.769369   22960 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:32:05.769372   22960 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:32:05.769375   22960 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:32:05.769378   22960 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:32:05.769380   22960 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:32:05.769384   22960 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:32:05.769388   22960 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:32:05.769395   22960 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:32:05.769399   22960 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:32:05.769403   22960 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:32:05.769407   22960 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:32:05.769411   22960 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:32:05.769416   22960 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:32:05.769433   22960 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:32:05.769439   22960 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:32:05.769444   22960 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:32:05.769446   22960 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:32:05.769449   22960 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:32:05.769451   22960 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:32:05.769454   22960 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:32:05.769456   22960 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:32:05.769458   22960 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:32:05.769461   22960 cri.go:89] found id: ""
	I1108 08:32:05.769504   22960 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:32:05.783425   22960 out.go:203] 
	W1108 08:32:05.784636   22960 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:32:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:32:05.784655   22960 out.go:285] * 
	W1108 08:32:05.787692   22960 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_f6150db7515caf82d8c4c5baeba9fd21f738a7e0_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:32:05.788974   22960 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable volumesnapshots addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable volumesnapshots --alsologtostderr -v=1": exit status 11
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable csi-hostpath-driver --alsologtostderr -v=1: exit status 11 (242.395396ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:32:05.852977   23040 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:32:05.853274   23040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:32:05.853298   23040 out.go:374] Setting ErrFile to fd 2...
	I1108 08:32:05.853308   23040 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:32:05.853515   23040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:32:05.853753   23040 mustload.go:66] Loading cluster: addons-758852
	I1108 08:32:05.854093   23040 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:32:05.854110   23040 addons.go:607] checking whether the cluster is paused
	I1108 08:32:05.854213   23040 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:32:05.854226   23040 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:32:05.854599   23040 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:32:05.872576   23040 ssh_runner.go:195] Run: systemctl --version
	I1108 08:32:05.872648   23040 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:32:05.891013   23040 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:32:05.982813   23040 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:32:05.982902   23040 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:32:06.012123   23040 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:32:06.012145   23040 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:32:06.012149   23040 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:32:06.012151   23040 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:32:06.012154   23040 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:32:06.012157   23040 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:32:06.012159   23040 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:32:06.012161   23040 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:32:06.012164   23040 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:32:06.012171   23040 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:32:06.012174   23040 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:32:06.012176   23040 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:32:06.012178   23040 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:32:06.012181   23040 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:32:06.012183   23040 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:32:06.012189   23040 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:32:06.012192   23040 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:32:06.012196   23040 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:32:06.012198   23040 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:32:06.012201   23040 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:32:06.012203   23040 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:32:06.012206   23040 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:32:06.012208   23040 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:32:06.012211   23040 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:32:06.012219   23040 cri.go:89] found id: ""
	I1108 08:32:06.012258   23040 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:32:06.025857   23040 out.go:203] 
	W1108 08:32:06.027196   23040 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:32:06Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:32:06.027219   23040 out.go:285] * 
	W1108 08:32:06.030410   23040 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_913eef9b964ccef8b5b536327192b81f4aff5da9_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:32:06.031711   23040 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable csi-hostpath-driver addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable csi-hostpath-driver --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CSI (45.52s)
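
As with MetricsServer, the CSI workflow itself completed: PVC provisioning, pod attach, snapshot, and snapshot-restore all succeeded, and only the two trailing addon-disable calls hit the paused-check. The sequence the test exercised, condensed from the run above (the intermediate waits on PVC and pod phase are omitted):

    kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pvc.yaml
    kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pv-pod.yaml
    kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/snapshot.yaml
    kubectl --context addons-758852 delete pod task-pv-pod
    kubectl --context addons-758852 delete pvc hpvc
    kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
    kubectl --context addons-758852 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml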

TestAddons/parallel/Headlamp (2.48s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-758852 --alsologtostderr -v=1
addons_test.go:808: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable headlamp -p addons-758852 --alsologtostderr -v=1: exit status 11 (238.239814ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:09.985754   18941 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:09.985926   18941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:09.985937   18941 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:09.985940   18941 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:09.986162   18941 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:09.986427   18941 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:09.986751   18941 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:09.986765   18941 addons.go:607] checking whether the cluster is paused
	I1108 08:31:09.986845   18941 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:09.986856   18941 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:09.987216   18941 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:10.005134   18941 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:10.005191   18941 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:10.022767   18941 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:10.115803   18941 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:10.115879   18941 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:10.143398   18941 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:10.143428   18941 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:10.143436   18941 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:10.143442   18941 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:10.143447   18941 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:10.143452   18941 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:10.143456   18941 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:10.143460   18941 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:10.143465   18941 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:10.143476   18941 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:10.143483   18941 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:10.143486   18941 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:10.143488   18941 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:10.143491   18941 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:10.143494   18941 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:10.143498   18941 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:10.143503   18941 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:10.143508   18941 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:10.143511   18941 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:10.143514   18941 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:10.143518   18941 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:10.143527   18941 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:10.143529   18941 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:10.143531   18941 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:10.143533   18941 cri.go:89] found id: ""
	I1108 08:31:10.143572   18941 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:10.157741   18941 out.go:203] 
	W1108 08:31:10.159028   18941 out.go:285] X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:10Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:10.159046   18941 out.go:285] * 
	W1108 08:31:10.161934   18941 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_af3b8a9ce4f102efc219f1404c9eed7a69cbf2d5_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:10.163237   18941 out.go:203] 

** /stderr **
addons_test.go:810: failed to enable headlamp addon: args: "out/minikube-linux-amd64 addons enable headlamp -p addons-758852 --alsologtostderr -v=1": exit status 11
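
The enable path runs the same paused-check as disable and fails identically ("enabled failed" above is minikube's own message text, preserved verbatim). The check's first step, copied from the trace, inspects the node container's state and can be replayed directly:

    docker container inspect addons-758852 --format={{.State.Status}}
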
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestAddons/parallel/Headlamp]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestAddons/parallel/Headlamp]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect addons-758852
helpers_test.go:243: (dbg) docker inspect addons-758852:

-- stdout --
	[
	    {
	        "Id": "e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310",
	        "Created": "2025-11-08T08:29:20.530762203Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 11377,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T08:29:20.564200147Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/hostname",
	        "HostsPath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/hosts",
	        "LogPath": "/var/lib/docker/containers/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310/e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310-json.log",
	        "Name": "/addons-758852",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-758852:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-758852",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e8c4e7921138d02c79c81abb9b82743116b8729b46d10373440e5e13091ef310",
	                "LowerDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0b6b2bbbd57e28ee1e058a99a229ca7b626de26e992c0edfe6cbbbd443cfb927/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-758852",
	                "Source": "/var/lib/docker/volumes/addons-758852/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-758852",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-758852",
	                "name.minikube.sigs.k8s.io": "addons-758852",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1670eeb0c3484c8e43bd330d854fcf230f75bedd8b125682c0c7076edd32448d",
	            "SandboxKey": "/var/run/docker/netns/1670eeb0c348",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-758852": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "92:02:8c:7f:08:9b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "2a7899770708615c4706a5710ae8a5596af2916badb1ef0028942a781a5d4667",
	                    "EndpointID": "defc44ed795e65e28bad37881281926a9d284568cd9b46084f66c7ad5f761f25",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-758852",
	                        "e8c4e7921138"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
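
The inspect output above also explains the sshutil lines in each stderr trace: minikube resolves the node's SSH endpoint from the 22/tcp binding (127.0.0.1:32768 here) with the template query below, copied verbatim from the trace:

    docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
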
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-758852 -n addons-758852
helpers_test.go:252: <<< TestAddons/parallel/Headlamp FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestAddons/parallel/Headlamp]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p addons-758852 logs -n 25: (1.09810773s)
helpers_test.go:260: TestAddons/parallel/Headlamp logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                                                                                                                                   ARGS                                                                                                                                                                                                                                   │        PROFILE         │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-103718 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-103718   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-103718                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-103718   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-713440 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                │ download-only-713440   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                                                                                                                                                                                                                                                                                                                    │ minikube               │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-713440                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-713440   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-103718                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-103718   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-713440                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ download-only-713440   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ start   │ --download-only -p download-docker-800960 --alsologtostderr --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                                                                    │ download-docker-800960 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ -p download-docker-800960                                                                                                                                                                                                                                                                                                                                                                                                                                                │ download-docker-800960 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ start   │ --download-only -p binary-mirror-174375 --alsologtostderr --binary-mirror http://127.0.0.1:33529 --driver=docker  --container-runtime=crio                                                                                                                                                                                                                                                                                                                               │ binary-mirror-174375   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ -p binary-mirror-174375                                                                                                                                                                                                                                                                                                                                                                                                                                                  │ binary-mirror-174375   │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ addons  │ enable dashboard -p addons-758852                                                                                                                                                                                                                                                                                                                                                                                                                                        │ addons-758852          │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ addons  │ disable dashboard -p addons-758852                                                                                                                                                                                                                                                                                                                                                                                                                                       │ addons-758852          │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ start   │ -p addons-758852 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher │ addons-758852          │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:31 UTC │
	│ addons  │ addons-758852 addons disable volcano --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                              │ addons-758852          │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ addons-758852 addons disable gcp-auth --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                             │ addons-758852          │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	│ addons  │ enable headlamp -p addons-758852 --alsologtostderr -v=1                                                                                                                                                                                                                                                                                                                                                                                                                  │ addons-758852          │ jenkins │ v1.37.0 │ 08 Nov 25 08:31 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:28:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:28:56.282578   10713 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:28:56.282859   10713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:56.282869   10713 out.go:374] Setting ErrFile to fd 2...
	I1108 08:28:56.282875   10713 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:56.283113   10713 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:28:56.283677   10713 out.go:368] Setting JSON to false
	I1108 08:28:56.284468   10713 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":687,"bootTime":1762589849,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:28:56.284555   10713 start.go:143] virtualization: kvm guest
	I1108 08:28:56.286294   10713 out.go:179] * [addons-758852] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:28:56.287591   10713 notify.go:221] Checking for updates...
	I1108 08:28:56.287628   10713 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:28:56.288977   10713 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:28:56.290412   10713 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:28:56.291761   10713 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:28:56.292976   10713 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:28:56.294271   10713 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:28:56.295569   10713 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:28:56.320749   10713 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:28:56.320832   10713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:56.373071   10713 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-08 08:28:56.364089471 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
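	// Annotation: the docker info line above comes from `docker system info
	// --format "{{json .}}"`, which minikube decodes before validating the
	// driver. A minimal sketch (hand-rolled struct, not the Docker SDK) of
	// reading a few of the top-level fields visible in that dump:
	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// sysInfo picks out a few of the keys visible in the dump above.
	type sysInfo struct {
		NCPU            int    `json:"NCPU"`
		MemTotal        int64  `json:"MemTotal"`
		CgroupDriver    string `json:"CgroupDriver"`
		OperatingSystem string `json:"OperatingSystem"`
	}
	
	func main() {
		out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
		if err != nil {
			panic(err)
		}
		var info sysInfo
		if err := json.Unmarshal(out, &info); err != nil {
			panic(err)
		}
		fmt.Printf("%d CPUs, %d bytes RAM, %q cgroups on %s\n",
			info.NCPU, info.MemTotal, info.CgroupDriver, info.OperatingSystem)
	}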
	I1108 08:28:56.373182   10713 docker.go:319] overlay module found
	I1108 08:28:56.375685   10713 out.go:179] * Using the docker driver based on user configuration
	I1108 08:28:56.376836   10713 start.go:309] selected driver: docker
	I1108 08:28:56.376850   10713 start.go:930] validating driver "docker" against <nil>
	I1108 08:28:56.376861   10713 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:28:56.377458   10713 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:56.436396   10713 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:50 SystemTime:2025-11-08 08:28:56.426263046 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:56.436571   10713 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:28:56.436858   10713 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 08:28:56.438528   10713 out.go:179] * Using Docker driver with root privileges
	I1108 08:28:56.439689   10713 cni.go:84] Creating CNI manager for ""
	I1108 08:28:56.439759   10713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:28:56.439772   10713 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 08:28:56.439862   10713 start.go:353] cluster config:
	{Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:28:56.441063   10713 out.go:179] * Starting "addons-758852" primary control-plane node in "addons-758852" cluster
	I1108 08:28:56.442192   10713 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 08:28:56.443515   10713 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 08:28:56.444612   10713 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:28:56.444636   10713 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 08:28:56.444646   10713 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 08:28:56.444658   10713 cache.go:59] Caching tarball of preloaded images
	I1108 08:28:56.444744   10713 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 08:28:56.444754   10713 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 08:28:56.445091   10713 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/config.json ...
	I1108 08:28:56.445132   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/config.json: {Name:mk828d6cdb3802c624ae356a896e12f2d3ab3fb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:28:56.462495   10713 cache.go:153] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 to local cache
	I1108 08:28:56.462699   10713 image.go:65] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory
	I1108 08:28:56.462720   10713 image.go:68] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local cache directory, skipping pull
	I1108 08:28:56.462724   10713 image.go:137] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in cache, skipping pull
	I1108 08:28:56.462732   10713 cache.go:156] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 as a tarball
	I1108 08:28:56.462739   10713 cache.go:166] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from local cache
	I1108 08:29:09.064121   10713 cache.go:168] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 from cached tarball
	I1108 08:29:09.064166   10713 cache.go:233] Successfully downloaded all kic artifacts
	I1108 08:29:09.064211   10713 start.go:360] acquireMachinesLock for addons-758852: {Name:mk5cdf28796b16a0304b87e414c01f4f8b67de6f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 08:29:09.064356   10713 start.go:364] duration metric: took 117.39µs to acquireMachinesLock for "addons-758852"
	I1108 08:29:09.064391   10713 start.go:93] Provisioning new machine with config: &{Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 08:29:09.064483   10713 start.go:125] createHost starting for "" (driver="docker")
	I1108 08:29:09.066209   10713 out.go:252] * Creating docker container (CPUs=2, Memory=4096MB) ...
	I1108 08:29:09.066472   10713 start.go:159] libmachine.API.Create for "addons-758852" (driver="docker")
	I1108 08:29:09.066511   10713 client.go:173] LocalClient.Create starting
	I1108 08:29:09.066618   10713 main.go:143] libmachine: Creating CA: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 08:29:09.408897   10713 main.go:143] libmachine: Creating client certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 08:29:09.680537   10713 cli_runner.go:164] Run: docker network inspect addons-758852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 08:29:09.697781   10713 cli_runner.go:211] docker network inspect addons-758852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 08:29:09.697851   10713 network_create.go:284] running [docker network inspect addons-758852] to gather additional debugging logs...
	I1108 08:29:09.697873   10713 cli_runner.go:164] Run: docker network inspect addons-758852
	W1108 08:29:09.714633   10713 cli_runner.go:211] docker network inspect addons-758852 returned with exit code 1
	I1108 08:29:09.714663   10713 network_create.go:287] error running [docker network inspect addons-758852]: docker network inspect addons-758852: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-758852 not found
	I1108 08:29:09.714675   10713 network_create.go:289] output of [docker network inspect addons-758852]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-758852 not found
	
	** /stderr **
	I1108 08:29:09.714780   10713 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 08:29:09.732331   10713 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001bca860}
	I1108 08:29:09.732381   10713 network_create.go:124] attempt to create docker network addons-758852 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1108 08:29:09.732442   10713 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-758852 addons-758852
	I1108 08:29:09.787148   10713 network_create.go:108] docker network addons-758852 192.168.49.0/24 created
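	// Annotation: before the `docker network create` above, minikube probes for
	// a free private /24 (here 192.168.49.0/24). A simplified sketch of that
	// selection, assuming candidates walk the 192.168.x.0/24 space and skip any
	// subnet overlapping an existing Docker network (not minikube's exact logic):
	package main
	
	import (
		"fmt"
		"net"
	)
	
	// pickFreeSubnet returns the first candidate /24 that does not overlap any
	// subnet already in use by a Docker network.
	func pickFreeSubnet(inUse []*net.IPNet) (*net.IPNet, error) {
		for third := 49; third < 256; third++ {
			_, candidate, err := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
			if err != nil {
				return nil, err
			}
			free := true
			for _, used := range inUse {
				// Two subnets overlap iff either contains the other's base address.
				if used.Contains(candidate.IP) || candidate.Contains(used.IP) {
					free = false
					break
				}
			}
			if free {
				return candidate, nil
			}
		}
		return nil, fmt.Errorf("no free 192.168.x.0/24 subnet")
	}
	
	func main() {
		_, bridge, _ := net.ParseCIDR("172.17.0.0/16") // e.g. Docker's default bridge
		subnet, err := pickFreeSubnet([]*net.IPNet{bridge})
		if err != nil {
			panic(err)
		}
		fmt.Println(subnet) // 192.168.49.0/24 when nothing in 192.168.x is taken
	}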
	I1108 08:29:09.787178   10713 kic.go:121] calculated static IP "192.168.49.2" for the "addons-758852" container
	I1108 08:29:09.787248   10713 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 08:29:09.804796   10713 cli_runner.go:164] Run: docker volume create addons-758852 --label name.minikube.sigs.k8s.io=addons-758852 --label created_by.minikube.sigs.k8s.io=true
	I1108 08:29:09.823202   10713 oci.go:103] Successfully created a docker volume addons-758852
	I1108 08:29:09.823269   10713 cli_runner.go:164] Run: docker run --rm --name addons-758852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758852 --entrypoint /usr/bin/test -v addons-758852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 08:29:16.072216   10713 cli_runner.go:217] Completed: docker run --rm --name addons-758852-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758852 --entrypoint /usr/bin/test -v addons-758852:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib: (6.248909991s)
	I1108 08:29:16.072252   10713 oci.go:107] Successfully prepared a docker volume addons-758852
	I1108 08:29:16.072314   10713 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:16.072339   10713 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 08:29:16.072416   10713 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	I1108 08:29:20.459775   10713 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-758852:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.387319742s)
	I1108 08:29:20.459804   10713 kic.go:203] duration metric: took 4.387463054s to extract preloaded images to volume ...
	W1108 08:29:20.459890   10713 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 08:29:20.459931   10713 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 08:29:20.459975   10713 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 08:29:20.515236   10713 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-758852 --name addons-758852 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-758852 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-758852 --network addons-758852 --ip 192.168.49.2 --volume addons-758852:/var --security-opt apparmor=unconfined --memory=4096mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 08:29:20.837911   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Running}}
	I1108 08:29:20.856522   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:20.875805   10713 cli_runner.go:164] Run: docker exec addons-758852 stat /var/lib/dpkg/alternatives/iptables
	I1108 08:29:20.922017   10713 oci.go:144] the created container "addons-758852" has a running status.
	I1108 08:29:20.922045   10713 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa...
	I1108 08:29:21.458987   10713 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 08:29:21.483789   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:21.502661   10713 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 08:29:21.502682   10713 kic_runner.go:114] Args: [docker exec --privileged addons-758852 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 08:29:21.561054   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:21.578350   10713 machine.go:94] provisionDockerMachine start ...
	I1108 08:29:21.578446   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:21.594813   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:21.595086   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:21.595106   10713 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 08:29:21.720355   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758852
	
	I1108 08:29:21.720381   10713 ubuntu.go:182] provisioning hostname "addons-758852"
	I1108 08:29:21.720451   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:21.738880   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:21.739106   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:21.739124   10713 main.go:143] libmachine: About to run SSH command:
	sudo hostname addons-758852 && echo "addons-758852" | sudo tee /etc/hostname
	I1108 08:29:21.874553   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: addons-758852
	
	I1108 08:29:21.874640   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:21.891716   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:21.891922   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:21.891940   10713 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-758852' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-758852/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-758852' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 08:29:22.015788   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: 
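	// Annotation: the "native" SSH client used above dials 127.0.0.1:32768, the
	// host port Docker mapped to the node container's 22/tcp, authenticating as
	// user "docker" with the generated id_rsa. A minimal sketch of the same
	// round trip with golang.org/x/crypto/ssh (path and port taken from the log;
	// host-key checking disabled since this is a throwaway local container):
	package main
	
	import (
		"fmt"
		"os"
	
		"golang.org/x/crypto/ssh"
	)
	
	func main() {
		// Key generated by minikube for the kic container (path from the log).
		keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa")
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(keyBytes)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test-only; no host key pinning
		}
		// 127.0.0.1:32768 is the published port from `docker container inspect`.
		client, err := ssh.Dial("tcp", "127.0.0.1:32768", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer sess.Close()
		out, err := sess.Output("hostname")
		if err != nil {
			panic(err)
		}
		fmt.Printf("%s", out) // expect: addons-758852
	}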
	I1108 08:29:22.015823   10713 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 08:29:22.015870   10713 ubuntu.go:190] setting up certificates
	I1108 08:29:22.015883   10713 provision.go:84] configureAuth start
	I1108 08:29:22.015930   10713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758852
	I1108 08:29:22.032963   10713 provision.go:143] copyHostCerts
	I1108 08:29:22.033032   10713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 08:29:22.033141   10713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 08:29:22.033200   10713 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 08:29:22.033322   10713 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.addons-758852 san=[127.0.0.1 192.168.49.2 addons-758852 localhost minikube]
	I1108 08:29:22.606007   10713 provision.go:177] copyRemoteCerts
	I1108 08:29:22.606084   10713 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 08:29:22.606116   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:22.624014   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:22.716425   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 08:29:22.734579   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 08:29:22.750471   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 08:29:22.767106   10713 provision.go:87] duration metric: took 751.209491ms to configureAuth
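	// Annotation: configureAuth above mints a server certificate whose SANs are
	// san=[127.0.0.1 192.168.49.2 addons-758852 localhost minikube]. A hedged
	// crypto/x509 sketch of signing such a cert from an existing CA; field
	// choices (key size, serial, usages) are illustrative, not minikube's
	// exact template:
	package certs
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)
	
	// NewServerCert returns DER bytes for a server cert covering the SAN list
	// from the log, signed by the given CA certificate and key.
	func NewServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(2),
			Subject:      pkix.Name{Organization: []string{"jenkins.addons-758852"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the cluster config
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"addons-758852", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.49.2")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		return der, key, nil
	}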
	I1108 08:29:22.767138   10713 ubuntu.go:206] setting minikube options for container-runtime
	I1108 08:29:22.767364   10713 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:22.767491   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:22.784581   10713 main.go:143] libmachine: Using SSH client type: native
	I1108 08:29:22.784773   10713 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I1108 08:29:22.784789   10713 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 08:29:23.018252   10713 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 08:29:23.018292   10713 machine.go:97] duration metric: took 1.439908575s to provisionDockerMachine
	I1108 08:29:23.018307   10713 client.go:176] duration metric: took 13.951786614s to LocalClient.Create
	I1108 08:29:23.018333   10713 start.go:167] duration metric: took 13.951862471s to libmachine.API.Create "addons-758852"
	I1108 08:29:23.018346   10713 start.go:293] postStartSetup for "addons-758852" (driver="docker")
	I1108 08:29:23.018361   10713 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 08:29:23.018426   10713 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 08:29:23.018480   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.035655   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.130641   10713 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 08:29:23.134093   10713 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 08:29:23.134122   10713 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 08:29:23.134136   10713 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 08:29:23.134197   10713 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 08:29:23.134221   10713 start.go:296] duration metric: took 115.868811ms for postStartSetup
	I1108 08:29:23.134504   10713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758852
	I1108 08:29:23.152660   10713 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/config.json ...
	I1108 08:29:23.152951   10713 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:29:23.153001   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.170754   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.260241   10713 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 08:29:23.264493   10713 start.go:128] duration metric: took 14.199993259s to createHost
	I1108 08:29:23.264521   10713 start.go:83] releasing machines lock for "addons-758852", held for 14.200146889s
	I1108 08:29:23.264588   10713 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-758852
	I1108 08:29:23.282509   10713 ssh_runner.go:195] Run: cat /version.json
	I1108 08:29:23.282551   10713 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 08:29:23.282601   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.282554   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:23.301667   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.302211   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:23.443452   10713 ssh_runner.go:195] Run: systemctl --version
	I1108 08:29:23.449731   10713 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 08:29:23.481969   10713 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 08:29:23.486183   10713 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 08:29:23.486245   10713 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 08:29:23.511815   10713 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 08:29:23.511840   10713 start.go:496] detecting cgroup driver to use...
	I1108 08:29:23.511874   10713 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 08:29:23.511918   10713 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 08:29:23.526914   10713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 08:29:23.538845   10713 docker.go:218] disabling cri-docker service (if available) ...
	I1108 08:29:23.538899   10713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 08:29:23.554183   10713 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 08:29:23.570558   10713 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 08:29:23.648260   10713 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 08:29:23.734447   10713 docker.go:234] disabling docker service ...
	I1108 08:29:23.734496   10713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 08:29:23.751901   10713 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 08:29:23.763794   10713 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 08:29:23.845741   10713 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 08:29:23.923576   10713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 08:29:23.935360   10713 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 08:29:23.948385   10713 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 08:29:23.948442   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.957973   10713 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 08:29:23.958022   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.966164   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.974230   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.982300   10713 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 08:29:23.989695   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:23.997621   10713 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 08:29:24.010406   10713 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
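	# Annotation: after the four sed edits above, the touched portion of
	# /etc/crio/crio.conf.d/02-crio.conf would read roughly as below. This is a
	# reconstruction from the commands shown, assuming an otherwise default
	# drop-in layout:
	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
	
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"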
	I1108 08:29:24.018711   10713 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 08:29:24.026504   10713 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 1
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I1108 08:29:24.026561   10713 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I1108 08:29:24.038008   10713 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 08:29:24.045625   10713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:29:24.121710   10713 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 08:29:24.220459   10713 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 08:29:24.220538   10713 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 08:29:24.224433   10713 start.go:564] Will wait 60s for crictl version
	I1108 08:29:24.224485   10713 ssh_runner.go:195] Run: which crictl
	I1108 08:29:24.228102   10713 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 08:29:24.252596   10713 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 08:29:24.252717   10713 ssh_runner.go:195] Run: crio --version
	I1108 08:29:24.279011   10713 ssh_runner.go:195] Run: crio --version
	I1108 08:29:24.307047   10713 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 08:29:24.308262   10713 cli_runner.go:164] Run: docker network inspect addons-758852 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 08:29:24.326419   10713 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1108 08:29:24.330447   10713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 08:29:24.340053   10713 kubeadm.go:884] updating cluster {Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 08:29:24.340168   10713 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 08:29:24.340237   10713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 08:29:24.371160   10713 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 08:29:24.371179   10713 crio.go:433] Images already preloaded, skipping extraction
	I1108 08:29:24.371220   10713 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 08:29:24.394869   10713 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 08:29:24.394891   10713 cache_images.go:86] Images are preloaded, skipping loading
	I1108 08:29:24.394899   10713 kubeadm.go:935] updating node { 192.168.49.2 8443 v1.34.1 crio true true} ...
	I1108 08:29:24.394986   10713 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-758852 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 08:29:24.395056   10713 ssh_runner.go:195] Run: crio config
	I1108 08:29:24.439047   10713 cni.go:84] Creating CNI manager for ""
	I1108 08:29:24.439072   10713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:29:24.439087   10713 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 08:29:24.439108   10713 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-758852 NodeName:addons-758852 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 08:29:24.439217   10713 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-758852"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
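	The generated kubeadm.yaml above is four YAML documents separated by ---: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A stdlib-only Go sketch that splits such a config and reports each document's kind (the cfg constant is abbreviated from the log above):

// Editor's sketch: enumerate the documents in a multi-document kubeadm
// config like the one above. Stdlib only; not minikube code.
package main

import (
	"fmt"
	"strings"
)

const cfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration`

func main() {
	for i, doc := range strings.Split(cfg, "\n---\n") {
		for _, line := range strings.Split(doc, "\n") {
			if strings.HasPrefix(strings.TrimSpace(line), "kind:") {
				fmt.Printf("document %d: %s\n", i+1, strings.TrimSpace(line))
			}
		}
	}
}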
	I1108 08:29:24.439267   10713 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 08:29:24.447041   10713 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 08:29:24.447098   10713 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 08:29:24.454648   10713 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1108 08:29:24.466756   10713 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 08:29:24.481619   10713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2209 bytes)
	I1108 08:29:24.494122   10713 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1108 08:29:24.497633   10713 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
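	The bash one-liner above makes the /etc/hosts update idempotent: the preceding grep checks for the entry, and only when it is missing does the rewrite run, dropping any stale control-plane.minikube.internal line before appending the current IP. A rough stdlib Go equivalent (the entry comes from the log; the sketch writes to /tmp instead of /etc/hosts):

// Editor's sketch of the /etc/hosts rewrite performed by the bash
// one-liner above: drop any stale control-plane.minikube.internal entry,
// then append the current one.
package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.49.2\tcontrol-plane.minikube.internal"
	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue // stale entry; replaced below
		}
		kept = append(kept, line)
	}
	out := strings.TrimRight(strings.Join(kept, "\n"), "\n") + "\n" + entry + "\n"
	if err := os.WriteFile("/tmp/hosts.new", []byte(out), 0o644); err != nil {
		panic(err)
	}
}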
	I1108 08:29:24.507686   10713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:29:24.586508   10713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 08:29:24.610724   10713 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852 for IP: 192.168.49.2
	I1108 08:29:24.610747   10713 certs.go:195] generating shared ca certs ...
	I1108 08:29:24.610766   10713 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:24.610880   10713 certs.go:241] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 08:29:24.853057   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt ...
	I1108 08:29:24.853094   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt: {Name:mk213ab2be08fef7a40a46410e4bb3f131841b3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:24.853295   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key ...
	I1108 08:29:24.853311   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key: {Name:mk7dd5dc5a93a882dec5e46ef4c2967f6e5aad7f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:24.853418   10713 certs.go:241] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 08:29:25.096361   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt ...
	I1108 08:29:25.096394   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt: {Name:mk8cf02648c02d2efd08c9f82d81d1c0a3d615a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.096580   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key ...
	I1108 08:29:25.096596   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key: {Name:mk6a9bff750f1ffb58c096df91bd477b5cd6f4e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.096695   10713 certs.go:257] generating profile certs ...
	I1108 08:29:25.096781   10713 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.key
	I1108 08:29:25.096800   10713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt with IP's: []
	I1108 08:29:25.515475   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt ...
	I1108 08:29:25.515509   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: {Name:mk9591853ee1a952a13591d356c4622190570821 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.515681   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.key ...
	I1108 08:29:25.515693   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.key: {Name:mkcd506f9f128490c95a640fd4ed9a978dcc7b11 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.515762   10713 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f
	I1108 08:29:25.515779   10713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1108 08:29:25.663805   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f ...
	I1108 08:29:25.663838   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f: {Name:mk46995d4732edbc9dccbf302c071ac5e2e50a02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.663997   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f ...
	I1108 08:29:25.664010   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f: {Name:mkb604a051c110e856b567bd8d8a60de60d4b1a3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.664111   10713 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt.6ae8e95f -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt
	I1108 08:29:25.664196   10713 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key.6ae8e95f -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key
	I1108 08:29:25.664259   10713 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key
	I1108 08:29:25.664276   10713 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt with IP's: []
	I1108 08:29:25.771560   10713 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt ...
	I1108 08:29:25.771591   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt: {Name:mk810bed9f024a88fb8db633e1bff5f363c3ec1f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:25.771763   10713 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key ...
	I1108 08:29:25.771776   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key: {Name:mkc12e8a3f3938a6071cf8c961543fa2701543e7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
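	The certs.go/crypto.go lines above generate a shared CA and then profile certs signed by it. A simplified, self-contained sketch of the CA step with crypto/x509 (key size, serial handling, and validity here are assumptions, not minikube's exact parameters):

// Editor's sketch of the CA-generation step logged above. Simplified:
// minikube's real code differs in key handling, serials, and file layout.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(10, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	// Self-signed: the template is both subject and issuer.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}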
	I1108 08:29:25.771958   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 08:29:25.771991   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 08:29:25.772014   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 08:29:25.772034   10713 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 08:29:25.772553   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 08:29:25.790007   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 08:29:25.806857   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 08:29:25.823558   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 08:29:25.840696   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 08:29:25.857233   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 08:29:25.873610   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 08:29:25.890336   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 08:29:25.906966   10713 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 08:29:25.925271   10713 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 08:29:25.936989   10713 ssh_runner.go:195] Run: openssl version
	I1108 08:29:25.942736   10713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 08:29:25.953209   10713 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:29:25.956667   10713 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:29:25.956710   10713 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 08:29:25.990039   10713 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 08:29:25.998321   10713 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 08:29:26.001850   10713 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 08:29:26.001906   10713 kubeadm.go:401] StartCluster: {Name:addons-758852 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:addons-758852 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:29:26.001978   10713 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:29:26.002016   10713 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:29:26.027925   10713 cri.go:89] found id: ""
	I1108 08:29:26.027993   10713 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 08:29:26.035969   10713 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 08:29:26.043819   10713 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 08:29:26.043880   10713 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 08:29:26.052120   10713 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 08:29:26.052139   10713 kubeadm.go:158] found existing configuration files:
	
	I1108 08:29:26.052197   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 08:29:26.060096   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 08:29:26.060144   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 08:29:26.067495   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 08:29:26.074892   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 08:29:26.074948   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 08:29:26.081707   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 08:29:26.088534   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 08:29:26.088580   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 08:29:26.095205   10713 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 08:29:26.102277   10713 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 08:29:26.102338   10713 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
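	The sequence above sweeps for stale kubeconfigs before kubeadm init: each of admin.conf, kubelet.conf, controller-manager.conf, and scheduler.conf is grepped for the expected control-plane endpoint and removed when it does not match (exit status 2 here simply means the file does not exist yet, the normal first-start case). A compact Go sketch of that loop, illustrative only:

// Editor's sketch of the stale-config sweep logged above: any existing
// kubeconfig that does not reference the expected control-plane endpoint
// is removed before kubeadm init runs.
package main

import (
	"bytes"
	"os"
	"path/filepath"
)

func main() {
	const endpoint = "https://control-plane.minikube.internal:8443"
	files := []string{"admin.conf", "kubelet.conf", "controller-manager.conf", "scheduler.conf"}
	for _, f := range files {
		path := filepath.Join("/etc/kubernetes", f)
		data, err := os.ReadFile(path)
		if err != nil {
			continue // missing file: nothing to clean up (first start)
		}
		if !bytes.Contains(data, []byte(endpoint)) {
			os.Remove(path) // stale config from a previous cluster
		}
	}
}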
	I1108 08:29:26.109163   10713 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 08:29:26.162900   10713 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 08:29:26.216230   10713 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 08:29:36.392057   10713 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 08:29:36.392133   10713 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 08:29:36.392225   10713 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 08:29:36.392304   10713 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 08:29:36.392347   10713 kubeadm.go:319] OS: Linux
	I1108 08:29:36.392393   10713 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 08:29:36.392455   10713 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 08:29:36.392540   10713 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 08:29:36.392591   10713 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 08:29:36.392632   10713 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 08:29:36.392707   10713 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 08:29:36.392786   10713 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 08:29:36.392846   10713 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 08:29:36.392961   10713 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 08:29:36.393099   10713 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 08:29:36.393251   10713 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 08:29:36.393354   10713 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 08:29:36.395871   10713 out.go:252]   - Generating certificates and keys ...
	I1108 08:29:36.395950   10713 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 08:29:36.396028   10713 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 08:29:36.396114   10713 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 08:29:36.396174   10713 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 08:29:36.396234   10713 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 08:29:36.396326   10713 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 08:29:36.396402   10713 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 08:29:36.396568   10713 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [addons-758852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 08:29:36.396648   10713 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 08:29:36.396770   10713 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [addons-758852 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1108 08:29:36.396855   10713 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 08:29:36.396956   10713 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 08:29:36.397014   10713 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 08:29:36.397075   10713 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 08:29:36.397124   10713 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 08:29:36.397179   10713 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 08:29:36.397226   10713 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 08:29:36.397303   10713 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 08:29:36.397375   10713 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 08:29:36.397462   10713 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 08:29:36.397538   10713 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 08:29:36.398863   10713 out.go:252]   - Booting up control plane ...
	I1108 08:29:36.398948   10713 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 08:29:36.399036   10713 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 08:29:36.399121   10713 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 08:29:36.399238   10713 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 08:29:36.399400   10713 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 08:29:36.399572   10713 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 08:29:36.399697   10713 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 08:29:36.399772   10713 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 08:29:36.399914   10713 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 08:29:36.400073   10713 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 08:29:36.400154   10713 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.000882834s
	I1108 08:29:36.400237   10713 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 08:29:36.400332   10713 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.49.2:8443/livez
	I1108 08:29:36.400444   10713 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 08:29:36.400533   10713 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 08:29:36.400634   10713 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.154599506s
	I1108 08:29:36.400746   10713 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.597597859s
	I1108 08:29:36.400833   10713 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501231029s
	I1108 08:29:36.400923   10713 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 08:29:36.401058   10713 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 08:29:36.401110   10713 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 08:29:36.401335   10713 kubeadm.go:319] [mark-control-plane] Marking the node addons-758852 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 08:29:36.401410   10713 kubeadm.go:319] [bootstrap-token] Using token: hf8a7f.2k8dlzg3ck7lp7gu
	I1108 08:29:36.402891   10713 out.go:252]   - Configuring RBAC rules ...
	I1108 08:29:36.403005   10713 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 08:29:36.403121   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 08:29:36.403315   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 08:29:36.403441   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 08:29:36.403581   10713 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 08:29:36.403692   10713 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 08:29:36.403822   10713 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 08:29:36.403885   10713 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 08:29:36.403956   10713 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 08:29:36.403965   10713 kubeadm.go:319] 
	I1108 08:29:36.404044   10713 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 08:29:36.404052   10713 kubeadm.go:319] 
	I1108 08:29:36.404133   10713 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 08:29:36.404146   10713 kubeadm.go:319] 
	I1108 08:29:36.404185   10713 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 08:29:36.404269   10713 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 08:29:36.404357   10713 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 08:29:36.404367   10713 kubeadm.go:319] 
	I1108 08:29:36.404414   10713 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 08:29:36.404420   10713 kubeadm.go:319] 
	I1108 08:29:36.404469   10713 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 08:29:36.404479   10713 kubeadm.go:319] 
	I1108 08:29:36.404553   10713 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 08:29:36.404658   10713 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 08:29:36.404751   10713 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 08:29:36.404765   10713 kubeadm.go:319] 
	I1108 08:29:36.404876   10713 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 08:29:36.404991   10713 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 08:29:36.405001   10713 kubeadm.go:319] 
	I1108 08:29:36.405126   10713 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token hf8a7f.2k8dlzg3ck7lp7gu \
	I1108 08:29:36.405240   10713 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 08:29:36.405260   10713 kubeadm.go:319] 	--control-plane 
	I1108 08:29:36.405264   10713 kubeadm.go:319] 
	I1108 08:29:36.405385   10713 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 08:29:36.405398   10713 kubeadm.go:319] 
	I1108 08:29:36.405499   10713 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token hf8a7f.2k8dlzg3ck7lp7gu \
	I1108 08:29:36.405633   10713 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
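	The --discovery-token-ca-cert-hash in the join commands above is, per kubeadm's documentation, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info. A stdlib Go sketch that recomputes it from ca.crt:

// Editor's sketch: recompute the --discovery-token-ca-cert-hash shown in
// the join command above, i.e. SHA-256 over the CA certificate's
// DER-encoded Subject Public Key Info.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}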
	I1108 08:29:36.405646   10713 cni.go:84] Creating CNI manager for ""
	I1108 08:29:36.405655   10713 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:29:36.407016   10713 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 08:29:36.408347   10713 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 08:29:36.412850   10713 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 08:29:36.412867   10713 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 08:29:36.425881   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 08:29:36.629915   10713 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 08:29:36.629962   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:36.629966   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-758852 minikube.k8s.io/updated_at=2025_11_08T08_29_36_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=addons-758852 minikube.k8s.io/primary=true
	I1108 08:29:36.639961   10713 ops.go:34] apiserver oom_adj: -16
	I1108 08:29:36.713165   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:37.213821   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:37.714122   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:38.213643   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:38.713607   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:39.214223   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:39.713538   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:40.214195   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:40.714316   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:41.213363   10713 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 08:29:41.285954   10713 kubeadm.go:1114] duration metric: took 4.656044772s to wait for elevateKubeSystemPrivileges
	I1108 08:29:41.285997   10713 kubeadm.go:403] duration metric: took 15.284091828s to StartCluster
	I1108 08:29:41.286020   10713 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:41.286167   10713 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:29:41.286745   10713 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 08:29:41.286972   10713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 08:29:41.287005   10713 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 08:29:41.287075   10713 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:true storage-provisioner:true storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
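	The interleaved, occasionally out-of-order "Setting addon ..." timestamps below are a product of minikube enabling addons concurrently. A minimal sketch of that fan-out pattern (enableAddon and the map contents are illustrative; the real logic lives in minikube's addons package):

// Editor's sketch of the concurrent addon fan-out that produces the
// interleaved log lines below. enableAddon is a hypothetical stand-in.
package main

import (
	"fmt"
	"sync"
)

func enableAddon(name string) { fmt.Println("Setting addon", name, "=true") }

func main() {
	toEnable := map[string]bool{
		"yakd": true, "inspektor-gadget": true, "registry": true,
		"metrics-server": true, "ingress": true, "volcano": true,
	}
	var wg sync.WaitGroup
	for name, enabled := range toEnable {
		if !enabled {
			continue
		}
		wg.Add(1)
		go func(n string) { // one goroutine per addon, as the log suggests
			defer wg.Done()
			enableAddon(n)
		}(name)
	}
	wg.Wait()
}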
	I1108 08:29:41.287195   10713 addons.go:70] Setting yakd=true in profile "addons-758852"
	I1108 08:29:41.287219   10713 addons.go:239] Setting addon yakd=true in "addons-758852"
	I1108 08:29:41.287229   10713 addons.go:70] Setting inspektor-gadget=true in profile "addons-758852"
	I1108 08:29:41.287253   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287259   10713 addons.go:239] Setting addon inspektor-gadget=true in "addons-758852"
	I1108 08:29:41.287264   10713 addons.go:70] Setting default-storageclass=true in profile "addons-758852"
	I1108 08:29:41.287290   10713 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:41.287310   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287326   10713 addons.go:70] Setting amd-gpu-device-plugin=true in profile "addons-758852"
	I1108 08:29:41.287339   10713 addons.go:239] Setting addon amd-gpu-device-plugin=true in "addons-758852"
	I1108 08:29:41.287339   10713 addons.go:70] Setting storage-provisioner-rancher=true in profile "addons-758852"
	I1108 08:29:41.287355   10713 addons_storage_classes.go:34] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-758852"
	I1108 08:29:41.287358   10713 addons.go:70] Setting storage-provisioner=true in profile "addons-758852"
	I1108 08:29:41.287370   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287376   10713 addons.go:70] Setting registry-creds=true in profile "addons-758852"
	I1108 08:29:41.287389   10713 addons.go:239] Setting addon storage-provisioner=true in "addons-758852"
	I1108 08:29:41.287404   10713 addons.go:239] Setting addon registry-creds=true in "addons-758852"
	I1108 08:29:41.287417   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287430   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287716   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287857   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287869   10713 addons.go:70] Setting gcp-auth=true in profile "addons-758852"
	I1108 08:29:41.287872   10713 addons.go:70] Setting ingress=true in profile "addons-758852"
	I1108 08:29:41.287885   10713 addons.go:239] Setting addon ingress=true in "addons-758852"
	I1108 08:29:41.287887   10713 mustload.go:66] Loading cluster: addons-758852
	I1108 08:29:41.287891   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287909   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.287960   10713 addons.go:70] Setting nvidia-device-plugin=true in profile "addons-758852"
	I1108 08:29:41.287974   10713 addons.go:239] Setting addon nvidia-device-plugin=true in "addons-758852"
	I1108 08:29:41.287996   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.288003   10713 addons.go:70] Setting registry=true in profile "addons-758852"
	I1108 08:29:41.288035   10713 addons.go:239] Setting addon registry=true in "addons-758852"
	I1108 08:29:41.288047   10713 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:29:41.288060   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.288262   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.288334   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.288459   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.288572   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287858   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.289252   10713 addons.go:70] Setting ingress-dns=true in profile "addons-758852"
	I1108 08:29:41.289271   10713 addons.go:239] Setting addon ingress-dns=true in "addons-758852"
	I1108 08:29:41.289315   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.289409   10713 addons.go:70] Setting volcano=true in profile "addons-758852"
	I1108 08:29:41.289421   10713 addons.go:239] Setting addon volcano=true in "addons-758852"
	I1108 08:29:41.289447   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.289611   10713 out.go:179] * Verifying Kubernetes components...
	I1108 08:29:41.289784   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.289811   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.290020   10713 addons.go:70] Setting cloud-spanner=true in profile "addons-758852"
	I1108 08:29:41.290043   10713 addons.go:239] Setting addon cloud-spanner=true in "addons-758852"
	I1108 08:29:41.290070   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.290220   10713 addons.go:70] Setting metrics-server=true in profile "addons-758852"
	I1108 08:29:41.290229   10713 addons.go:239] Setting addon metrics-server=true in "addons-758852"
	I1108 08:29:41.290243   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.290346   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.291118   10713 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 08:29:41.291330   10713 addons.go:70] Setting csi-hostpath-driver=true in profile "addons-758852"
	I1108 08:29:41.291478   10713 addons.go:239] Setting addon csi-hostpath-driver=true in "addons-758852"
	I1108 08:29:41.291509   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.293076   10713 addons.go:70] Setting volumesnapshots=true in profile "addons-758852"
	I1108 08:29:41.293102   10713 addons.go:239] Setting addon volumesnapshots=true in "addons-758852"
	I1108 08:29:41.293126   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.293605   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.293713   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.287311   10713 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "addons-758852"
	I1108 08:29:41.287858   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.294661   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.302410   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.303308   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.353069   10713 out.go:179]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1108 08:29:41.353220   10713 out.go:179]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.46.0
	I1108 08:29:41.354437   10713 addons.go:436] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 08:29:41.354459   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1108 08:29:41.354521   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.354780   10713 addons.go:436] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1108 08:29:41.354812   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (15034 bytes)
	I1108 08:29:41.354866   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.364375   10713 addons.go:239] Setting addon storage-provisioner-rancher=true in "addons-758852"
	I1108 08:29:41.364433   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.365924   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.371193   10713 out.go:179]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.18.0
	I1108 08:29:41.371901   10713 out.go:179]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.43
	I1108 08:29:41.372029   10713 out.go:179]   - Using image docker.io/upmcenterprises/registry-creds:1.10
	I1108 08:29:41.373430   10713 addons.go:436] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 08:29:41.373450   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1108 08:29:41.373506   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.373734   10713 addons.go:436] installing /etc/kubernetes/addons/deployment.yaml
	I1108 08:29:41.373746   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1108 08:29:41.373785   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.373957   10713 addons.go:436] installing /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 08:29:41.373970   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-creds-rc.yaml (3306 bytes)
	I1108 08:29:41.374008   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	W1108 08:29:41.383725   10713 out.go:285] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1108 08:29:41.384116   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1108 08:29:41.384234   10713 out.go:179]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1108 08:29:41.384328   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/controller:v1.13.3
	I1108 08:29:41.385519   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1108 08:29:41.385537   10713 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1108 08:29:41.385552   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.385776   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.387133   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1108 08:29:41.387153   10713 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1108 08:29:41.387202   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.387869   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:29:41.387936   10713 out.go:179]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.9
	I1108 08:29:41.389503   10713 out.go:179]   - Using image docker.io/registry:3.0.0
	I1108 08:29:41.389549   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:29:41.390668   10713 addons.go:436] installing /etc/kubernetes/addons/registry-rc.yaml
	I1108 08:29:41.390719   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1108 08:29:41.390771   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.390901   10713 addons.go:436] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 08:29:41.390907   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1108 08:29:41.390939   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.395130   10713 addons.go:239] Setting addon default-storageclass=true in "addons-758852"
	I1108 08:29:41.395175   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:41.395649   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:41.400722   10713 out.go:179]   - Using image docker.io/kicbase/minikube-ingress-dns:0.0.4
	I1108 08:29:41.401985   10713 addons.go:436] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 08:29:41.402003   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2889 bytes)
	I1108 08:29:41.402202   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.402364   10713 out.go:179]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.8.0
	I1108 08:29:41.404988   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1108 08:29:41.405007   10713 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1108 08:29:41.405036   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1108 08:29:41.405232   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.412785   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1108 08:29:41.415749   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1108 08:29:41.417442   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1108 08:29:41.419580   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1108 08:29:41.421562   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1108 08:29:41.423033   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1108 08:29:41.424665   10713 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 08:29:41.426567   10713 out.go:179]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1108 08:29:41.426770   10713 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 08:29:41.426918   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 08:29:41.427118   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.427813   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1108 08:29:41.427932   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1108 08:29:41.428928   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.445504   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.447611   10713 out.go:179]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1108 08:29:41.449127   10713 out.go:179]   - Using image docker.io/busybox:stable
	I1108 08:29:41.450031   10713 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
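	The pipeline above edits the CoreDNS Corefile in place: sed injects a hosts block resolving host.minikube.internal to the gateway IP ahead of the forward plugin (plus a log directive before errors), and kubectl replace pushes the result back into the ConfigMap. A stdlib Go sketch of the hosts-block insertion (the sample Corefile is abbreviated):

// Editor's sketch of the Corefile edit performed by the sed pipeline
// above: insert a hosts block before the forward directive.
package main

import (
	"fmt"
	"strings"
)

const hostsBlock = `        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
`

func main() {
	corefile := `.:53 {
        errors
        forward . /etc/resolv.conf
        cache 30
}`
	var out strings.Builder
	for _, line := range strings.Split(corefile, "\n") {
		if strings.HasPrefix(strings.TrimSpace(line), "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // inject before the forward plugin
		}
		out.WriteString(line + "\n")
	}
	fmt.Print(out.String())
}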
	I1108 08:29:41.451421   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.451710   10713 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 08:29:41.451838   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1108 08:29:41.451905   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.455725   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.455732   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.456120   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.459138   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.459648   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.477804   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.479353   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.495460   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.496370   10713 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 08:29:41.496387   10713 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 08:29:41.496435   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:41.498486   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.498629   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.503749   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	W1108 08:29:41.506423   10713 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I1108 08:29:41.506461   10713 retry.go:31] will retry after 144.57419ms: ssh: handshake failed: EOF
	I1108 08:29:41.515373   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.526656   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:41.527863   10713 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 08:29:41.593933   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1108 08:29:41.620769   10713 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1108 08:29:41.620799   10713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1108 08:29:41.626645   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ig-deployment.yaml
	I1108 08:29:41.628799   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1108 08:29:41.640071   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1108 08:29:41.640096   10713 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1108 08:29:41.643633   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1108 08:29:41.646606   10713 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1108 08:29:41.646733   10713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1108 08:29:41.654364   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 08:29:41.665030   10713 addons.go:436] installing /etc/kubernetes/addons/registry-svc.yaml
	I1108 08:29:41.665123   10713 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1108 08:29:41.675641   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1108 08:29:41.675718   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1108 08:29:41.677855   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1108 08:29:41.696532   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 08:29:41.697158   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1108 08:29:41.698341   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1108 08:29:41.698361   10713 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1108 08:29:41.700023   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml
	I1108 08:29:41.712602   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1108 08:29:41.712765   10713 addons.go:436] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1108 08:29:41.712901   10713 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1108 08:29:41.715913   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1108 08:29:41.715938   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1108 08:29:41.740642   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1108 08:29:41.740673   10713 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1108 08:29:41.746531   10713 addons.go:436] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1108 08:29:41.746552   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1108 08:29:41.764315   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1108 08:29:41.764341   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1108 08:29:41.775903   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1108 08:29:41.775926   10713 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1108 08:29:41.797483   10713 addons.go:436] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1108 08:29:41.799100   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1108 08:29:41.820665   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1108 08:29:41.822165   10713 addons.go:436] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:29:41.822232   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1108 08:29:41.833039   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1108 08:29:41.833061   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1108 08:29:41.833885   10713 start.go:977] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
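The sed pipeline run at 08:29:41 (ssh_runner.go:195) is dense; unpacked, it edits the coredns ConfigMap in place so that the following hosts block sits ahead of the forward-to-/etc/resolv.conf directive, which is what lets pods resolve host.minikube.internal to the host gateway. The block below is reconstructed directly from the sed expression itself:

	hosts {
	   192.168.49.1 host.minikube.internal
	   fallthrough
	}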
	I1108 08:29:41.838499   10713 node_ready.go:35] waiting up to 6m0s for node "addons-758852" to be "Ready" ...
	I1108 08:29:41.865791   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1108 08:29:41.887711   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1108 08:29:41.887737   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1108 08:29:41.895716   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:29:41.898237   10713 addons.go:436] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1108 08:29:41.898332   10713 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1108 08:29:41.951037   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1108 08:29:41.951126   10713 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1108 08:29:41.966275   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1108 08:29:41.966317   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1108 08:29:41.995714   10713 addons.go:436] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 08:29:41.995740   10713 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1108 08:29:42.013791   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1108 08:29:42.013831   10713 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1108 08:29:42.051398   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1108 08:29:42.078066   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1108 08:29:42.078099   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1108 08:29:42.131921   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1108 08:29:42.131944   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1108 08:29:42.177750   10713 addons.go:436] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 08:29:42.177859   10713 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1108 08:29:42.211080   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1108 08:29:42.350230   10713 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-758852" context rescaled to 1 replicas
	I1108 08:29:42.790938   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (1.162101916s)
	I1108 08:29:42.790983   10713 addons.go:480] Verifying addon ingress=true in "addons-758852"
	I1108 08:29:42.791059   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (1.147397849s)
	I1108 08:29:42.791213   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.136823558s)
	I1108 08:29:42.791343   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.094779249s)
	I1108 08:29:42.791591   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.11340205s)
	I1108 08:29:42.791635   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (1.094448856s)
	I1108 08:29:42.791698   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/registry-creds-rc.yaml: (1.091628596s)
	I1108 08:29:42.791788   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (1.078960033s)
	I1108 08:29:42.791841   10713 addons.go:480] Verifying addon registry=true in "addons-758852"
	I1108 08:29:42.792559   10713 out.go:179] * Verifying ingress addon...
	I1108 08:29:42.793584   10713 out.go:179] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-758852 service yakd-dashboard -n yakd-dashboard
	
	I1108 08:29:42.793627   10713 out.go:179] * Verifying registry addon...
	I1108 08:29:42.795510   10713 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1108 08:29:42.796171   10713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	W1108 08:29:42.797342   10713 out.go:285] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
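The "Operation cannot be fulfilled ... the object has been modified" failure above is Kubernetes optimistic concurrency: the local-path storage class was updated by another writer between minikube's read and its write, so the stale resourceVersion was rejected. A minimal sketch of the usual remedy, assuming client-go's conflict-retry helper; markNonDefault is an illustrative name, not minikube's function:

	// a minimal sketch, assuming client-go; re-reads the object on every
	// attempt so each update carries a fresh resourceVersion
	package sketch

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	func markNonDefault(cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
			return err // a Conflict here makes RetryOnConflict loop with backoff
		})
	}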
	I1108 08:29:42.798821   10713 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 08:29:42.798840   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:42.798942   10713 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1108 08:29:42.798955   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
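The kapi.go:75/kapi.go:96 lines that dominate the rest of this log are a poll loop over pods matching a label selector, repeating until every matched pod leaves Pending. A minimal sketch of that shape, assuming client-go; the 3s/6m numbers and the waitForPods name are illustrative:

	// a minimal sketch, assuming client-go; mirrors the kapi.go poll shape
	package sketch

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func waitForPods(cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists: keep polling
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil // at least one pod still Pending
					}
				}
				return true, nil
			})
	}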
	I1108 08:29:43.222215   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (1.326453594s)
	W1108 08:29:43.222270   10713 addons.go:462] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1108 08:29:43.222315   10713 retry.go:31] will retry after 272.143188ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
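This failure is a CRD registration race, not a bad manifest: the same kubectl invocation creates the VolumeSnapshot CRDs and a VolumeSnapshotClass that depends on them, and kubectl's discovery cannot map the new kind until the API server starts serving the CRDs, hence "ensure CRDs are installed first". minikube's answer (retry.go:31) is to wait and re-apply, which succeeds with the apply --force run a few lines below. A minimal sketch of that retry shape; the schedule is illustrative, not minikube's actual backoff:

	// a minimal sketch of the retry-after-delay pattern retry.go logs here
	package sketch

	import (
		"log"
		"time"
	)

	func applyWithRetry(run func() error) error {
		delay := 250 * time.Millisecond
		var err error
		for attempt := 0; attempt < 5; attempt++ {
			if err = run(); err == nil {
				return nil
			}
			log.Printf("will retry after %v: %v", delay, err)
			time.Sleep(delay)
			delay *= 2 // give the API server time to start serving the new CRDs
		}
		return err
	}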
	I1108 08:29:43.222347   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.170900163s)
	I1108 08:29:43.222383   10713 addons.go:480] Verifying addon metrics-server=true in "addons-758852"
	I1108 08:29:43.222550   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (1.011410422s)
	I1108 08:29:43.222572   10713 addons.go:480] Verifying addon csi-hostpath-driver=true in "addons-758852"
	I1108 08:29:43.224419   10713 out.go:179] * Verifying csi-hostpath-driver addon...
	I1108 08:29:43.226599   10713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1108 08:29:43.228907   10713 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 08:29:43.228928   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:43.298710   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:43.299119   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:43.494897   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1108 08:29:43.729392   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:43.830311   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:43.830527   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:43.841640   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
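These node_ready.go:57 warnings repeat every couple of seconds until the node reports Ready, which is also why every addon pod above is still Pending: a node whose Ready condition is False typically carries the not-ready taint, so untolerating pods are not scheduled until kubelet and the CNI come up. A minimal sketch of the condition check such a poll performs, assuming client-go; nodeReady is an illustrative name:

	// a minimal sketch, assuming client-go; reads the Node's Ready condition
	package sketch

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	func nodeReady(cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, fmt.Errorf("node %q reports no Ready condition", name)
	}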
	I1108 08:29:44.229639   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:44.330819   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:44.331000   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:44.729630   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:44.798937   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:44.798994   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:45.230265   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:45.298663   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:45.298777   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:45.729492   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:45.829912   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:45.830001   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:45.971045   10713 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.476108663s)
	I1108 08:29:46.229775   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:46.330913   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:46.331073   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:46.341311   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:46.729901   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:46.830941   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:46.831019   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:47.230019   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:47.298642   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:47.298699   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:47.730434   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:47.830944   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:47.831041   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:48.230348   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:48.298795   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:48.298890   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:48.730257   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:48.831184   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:48.831407   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:48.841537   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:48.993629   10713 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1108 08:29:48.993703   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:49.012500   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:49.118098   10713 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1108 08:29:49.130976   10713 addons.go:239] Setting addon gcp-auth=true in "addons-758852"
	I1108 08:29:49.131022   10713 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:29:49.131502   10713 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:29:49.149236   10713 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1108 08:29:49.149301   10713 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:29:49.166615   10713 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:29:49.229505   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:49.258101   10713 out.go:179]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.6.3
	I1108 08:29:49.259362   10713 out.go:179]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1108 08:29:49.260531   10713 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1108 08:29:49.260549   10713 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1108 08:29:49.273999   10713 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1108 08:29:49.274019   10713 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1108 08:29:49.286431   10713 addons.go:436] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 08:29:49.286451   10713 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1108 08:29:49.298317   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:49.298892   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:49.299519   10713 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1108 08:29:49.594965   10713 addons.go:480] Verifying addon gcp-auth=true in "addons-758852"
	I1108 08:29:49.596156   10713 out.go:179] * Verifying gcp-auth addon...
	I1108 08:29:49.597984   10713 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1108 08:29:49.600352   10713 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1108 08:29:49.600370   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:49.730112   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:49.799045   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:49.799086   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:50.101023   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:50.229682   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:50.298324   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:50.298960   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:50.601507   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:50.730728   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:50.798511   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:50.798945   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:51.101077   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:51.229650   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:51.298546   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:51.298977   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:29:51.341478   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:51.601647   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:51.730261   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:51.798789   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:51.798891   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:52.100862   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:52.229267   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:52.298888   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:52.299149   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:52.601026   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:52.729696   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:52.798351   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:52.799036   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:53.101454   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:53.229736   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:53.298447   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:53.299015   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:53.601183   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:53.729493   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:53.799087   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:53.799155   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:53.841410   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:54.100866   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:54.229667   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:54.298307   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:54.298882   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:54.601245   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:54.729968   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:54.798743   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:54.798814   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:55.100299   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:55.229761   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:55.298614   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:55.299015   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:55.600993   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:55.729511   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:55.799088   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:55.799195   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:55.841624   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:56.101233   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:56.230173   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:56.299018   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:56.299133   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:56.601033   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:56.729952   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:56.798491   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:56.798650   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:57.100428   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:57.229916   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:57.298323   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:57.298558   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:57.601540   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:57.730037   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:57.798803   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:57.798854   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:58.100563   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:58.230052   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:58.298618   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:58.298717   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:29:58.341753   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:29:58.601886   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:58.729836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:58.798634   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:58.798745   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:59.100539   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:59.230261   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:59.298846   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:29:59.300357   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:59.601119   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:29:59.729682   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:29:59.798314   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:29:59.798824   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:00.100954   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:00.229362   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:00.299022   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:00.299086   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:00.601378   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:00.730338   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:00.799038   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:00.799056   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:00.841495   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:01.101007   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:01.229620   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:01.298585   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:01.299240   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:01.601512   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:01.729943   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:01.798631   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:01.798791   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:02.100986   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:02.229510   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:02.298988   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:02.299037   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:02.600857   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:02.729829   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:02.798740   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:02.799036   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:02.841647   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:03.101348   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:03.229682   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:03.298549   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:03.298836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:03.601229   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:03.729955   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:03.798246   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:03.798400   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:04.101497   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:04.230232   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:04.298691   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:04.298772   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:04.601030   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:04.729937   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:04.798404   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:04.798453   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:04.841683   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:05.101258   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:05.229849   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:05.298488   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:05.298918   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:05.600926   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:05.729217   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:05.798690   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:05.798861   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:06.101102   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:06.229654   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:06.298335   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:06.298933   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:06.601421   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:06.730210   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:06.798676   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:06.798798   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:07.100635   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:07.228976   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:07.298340   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:07.298497   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:07.341641   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:07.601197   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:07.729634   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:07.798073   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:07.798700   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:08.100767   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:08.229108   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:08.298451   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:08.298650   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:08.600416   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:08.730075   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:08.798632   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:08.798773   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:09.100661   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:09.229461   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:09.299118   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:09.299238   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:09.341703   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:09.601401   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:09.729963   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:09.798800   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:09.798844   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:10.103966   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:10.229343   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:10.298844   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:10.298933   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:10.600851   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:10.729390   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:10.798901   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:10.798955   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:11.100647   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:11.229105   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:11.298689   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:11.298899   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:11.600998   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:11.729861   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:11.798366   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:11.799143   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:11.841380   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:12.101073   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:12.229625   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:12.299085   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:12.299128   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:12.601333   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:12.729770   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:12.798836   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:12.799095   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:13.101014   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:13.229396   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:13.298757   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:13.298889   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:13.600712   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:13.729100   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:13.798792   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:13.798959   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:14.101249   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:14.230097   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:14.298783   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:14.298846   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:14.341142   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:14.601228   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:14.729652   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:14.798257   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:14.799050   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:15.101172   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:15.229737   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:15.298683   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:15.299098   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:15.601098   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:15.729774   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:15.798427   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:15.798948   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:16.100926   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:16.229268   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:16.298792   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:16.298932   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:16.341397   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:16.600936   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:16.729499   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:16.799171   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:16.799412   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:17.101186   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:17.229718   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:17.298454   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:17.299134   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:17.601180   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:17.729891   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:17.798344   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:17.798500   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:18.101471   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:18.230036   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:18.298538   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:18.298672   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:18.600803   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:18.729319   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:18.798580   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:18.798591   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	W1108 08:30:18.841850   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:19.100982   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:19.229679   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:19.298272   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:19.298884   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:19.601056   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:19.729744   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:19.798234   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:19.798954   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:20.101312   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:20.229939   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:20.298190   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:20.298360   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:20.601082   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:20.729869   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:20.798455   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:20.798609   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:21.101341   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:21.229690   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:21.298465   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:21.299018   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	W1108 08:30:21.341548   10713 node_ready.go:57] node "addons-758852" has "Ready":"False" status (will retry)
	I1108 08:30:21.601897   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:21.729587   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:21.799254   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:21.799259   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:22.101255   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:22.229938   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:22.301924   10713 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1108 08:30:22.301952   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:22.303065   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:22.341407   10713 node_ready.go:49] node "addons-758852" is "Ready"
	I1108 08:30:22.341440   10713 node_ready.go:38] duration metric: took 40.502908626s for node "addons-758852" to be "Ready" ...
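	The "Ready" gate above is simply the node's NodeReady condition flipping to True. A minimal client-go sketch of that check (the kubeconfig path and the nodeReady helper are illustrative, not minikube's actual node_ready.go):

	// Sketch: fetch the node and report whether its NodeReady condition is True.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		// Assumption: default kubeconfig location; minikube wires its own client.
		config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(config)
		if err != nil {
			panic(err)
		}
		ready, err := nodeReady(cs, "addons-758852")
		fmt.Println("ready:", ready, "err:", err)
	}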
	I1108 08:30:22.341457   10713 api_server.go:52] waiting for apiserver process to appear ...
	I1108 08:30:22.341511   10713 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:30:22.358201   10713 api_server.go:72] duration metric: took 41.0711634s to wait for apiserver process to appear ...
	I1108 08:30:22.358229   10713 api_server.go:88] waiting for apiserver healthz status ...
	I1108 08:30:22.358247   10713 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1108 08:30:22.362520   10713 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1108 08:30:22.363313   10713 api_server.go:141] control plane version: v1.34.1
	I1108 08:30:22.363336   10713 api_server.go:131] duration metric: took 5.101345ms to wait for apiserver health ...
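	The healthz probe is a plain HTTPS GET that succeeds once the endpoint answers 200 with body "ok", as recorded above. A minimal sketch of such a loop, assuming anonymous TLS with verification disabled rather than the client certificates minikube actually presents:

	// Sketch: poll the apiserver /healthz endpoint until it returns 200/"ok".
	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Assumption: skip cert verification to keep the sketch self-contained.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver healthz at %s not ready after %s", url, timeout)
	}

	func main() {
		if err := waitForHealthz("https://192.168.49.2:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}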
	I1108 08:30:22.363344   10713 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 08:30:22.367935   10713 system_pods.go:59] 20 kube-system pods found
	I1108 08:30:22.367963   10713 system_pods.go:61] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending
	I1108 08:30:22.367968   10713 system_pods.go:61] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Pending
	I1108 08:30:22.367971   10713 system_pods.go:61] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending
	I1108 08:30:22.367979   10713 system_pods.go:61] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:22.367984   10713 system_pods.go:61] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending
	I1108 08:30:22.367990   10713 system_pods.go:61] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:22.367994   10713 system_pods.go:61] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:22.367997   10713 system_pods.go:61] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:22.368000   10713 system_pods.go:61] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:22.368005   10713 system_pods.go:61] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:22.368009   10713 system_pods.go:61] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:22.368013   10713 system_pods.go:61] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:22.368019   10713 system_pods.go:61] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:22.368026   10713 system_pods.go:61] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending
	I1108 08:30:22.368031   10713 system_pods.go:61] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:22.368036   10713 system_pods.go:61] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:22.368041   10713 system_pods.go:61] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending
	I1108 08:30:22.368044   10713 system_pods.go:61] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending
	I1108 08:30:22.368048   10713 system_pods.go:61] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending
	I1108 08:30:22.368053   10713 system_pods.go:61] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:22.368062   10713 system_pods.go:74] duration metric: took 4.712963ms to wait for pod list to return data ...
	I1108 08:30:22.368070   10713 default_sa.go:34] waiting for default service account to be created ...
	I1108 08:30:22.369892   10713 default_sa.go:45] found service account: "default"
	I1108 08:30:22.369912   10713 default_sa.go:55] duration metric: took 1.833838ms for default service account to be created ...
	I1108 08:30:22.369920   10713 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 08:30:22.372587   10713 system_pods.go:86] 20 kube-system pods found
	I1108 08:30:22.372612   10713 system_pods.go:89] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending
	I1108 08:30:22.372616   10713 system_pods.go:89] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Pending
	I1108 08:30:22.372620   10713 system_pods.go:89] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending
	I1108 08:30:22.372626   10713 system_pods.go:89] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:22.372630   10713 system_pods.go:89] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending
	I1108 08:30:22.372675   10713 system_pods.go:89] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:22.372680   10713 system_pods.go:89] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:22.372686   10713 system_pods.go:89] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:22.372690   10713 system_pods.go:89] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:22.372695   10713 system_pods.go:89] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:22.372702   10713 system_pods.go:89] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:22.372707   10713 system_pods.go:89] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:22.372714   10713 system_pods.go:89] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:22.372721   10713 system_pods.go:89] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:22.372731   10713 system_pods.go:89] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:22.372736   10713 system_pods.go:89] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:22.372742   10713 system_pods.go:89] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending
	I1108 08:30:22.372748   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending
	I1108 08:30:22.372753   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending
	I1108 08:30:22.372757   10713 system_pods.go:89] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:22.372772   10713 retry.go:31] will retry after 294.577442ms: missing components: kube-dns
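	The "will retry after ..." lines come from a poll-with-jittered-backoff loop: list the kube-system pods, collect whatever is still missing, sleep a randomized interval, and try again until the deadline. A minimal sketch under those assumptions (the interval constants and the retryUntil helper are illustrative, not minikube's retry.go):

	// Sketch: retry a readiness check with a jittered delay between attempts.
	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	func retryUntil(deadline time.Time, check func() (missing []string)) error {
		for {
			missing := check()
			if len(missing) == 0 {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out; missing components: %v", missing)
			}
			// Jittered backoff on the order of the ~300ms waits in the log above.
			wait := 250*time.Millisecond + time.Duration(rand.Intn(150))*time.Millisecond
			fmt.Printf("will retry after %s: missing components: %v\n", wait, missing)
			time.Sleep(wait)
		}
	}

	func main() {
		deadline := time.Now().Add(5 * time.Second)
		calls := 0
		_ = retryUntil(deadline, func() []string {
			calls++
			if calls < 3 {
				return []string{"kube-dns"} // pretend kube-dns needs two retries
			}
			return nil
		})
	}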
	I1108 08:30:22.601389   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:22.703638   10713 system_pods.go:86] 20 kube-system pods found
	I1108 08:30:22.703676   10713 system_pods.go:89] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 08:30:22.703688   10713 system_pods.go:89] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:22.703698   10713 system_pods.go:89] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 08:30:22.703706   10713 system_pods.go:89] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:22.703715   10713 system_pods.go:89] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 08:30:22.703721   10713 system_pods.go:89] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:22.703729   10713 system_pods.go:89] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:22.703735   10713 system_pods.go:89] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:22.703744   10713 system_pods.go:89] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:22.703758   10713 system_pods.go:89] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:22.703766   10713 system_pods.go:89] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:22.703773   10713 system_pods.go:89] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:22.703784   10713 system_pods.go:89] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:22.703797   10713 system_pods.go:89] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:22.703811   10713 system_pods.go:89] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:22.703823   10713 system_pods.go:89] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:22.703836   10713 system_pods.go:89] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 08:30:22.703848   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:22.703861   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:22.703872   10713 system_pods.go:89] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 08:30:22.703895   10713 retry.go:31] will retry after 317.889685ms: missing components: kube-dns
	I1108 08:30:22.741765   10713 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1108 08:30:22.741794   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:22.802464   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:22.802736   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:23.027025   10713 system_pods.go:86] 20 kube-system pods found
	I1108 08:30:23.027066   10713 system_pods.go:89] "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Pending / Ready:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin]) / ContainersReady:ContainersNotReady (containers with unready status: [amd-gpu-device-plugin])
	I1108 08:30:23.027076   10713 system_pods.go:89] "coredns-66bc5c9577-6cwbz" [496e1d39-3a98-433e-8356-a37a31a64b2e] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 08:30:23.027089   10713 system_pods.go:89] "csi-hostpath-attacher-0" [e9edaa32-cd2a-470c-b0e7-786d171571f8] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I1108 08:30:23.027097   10713 system_pods.go:89] "csi-hostpath-resizer-0" [cf5e5a76-b057-4f8b-ad2e-1836ca7d3838] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I1108 08:30:23.027106   10713 system_pods.go:89] "csi-hostpathplugin-rtgg7" [69fdf29c-ec5d-40ee-adda-653af290a034] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I1108 08:30:23.027122   10713 system_pods.go:89] "etcd-addons-758852" [8f5f6372-5203-4988-9b41-b9c07e306930] Running
	I1108 08:30:23.027128   10713 system_pods.go:89] "kindnet-6qtgf" [2a7a173e-dc1e-47b3-8535-9c9737e79a35] Running
	I1108 08:30:23.027138   10713 system_pods.go:89] "kube-apiserver-addons-758852" [775bfc07-86dc-4663-a4e6-f2cd4e6cf250] Running
	I1108 08:30:23.027144   10713 system_pods.go:89] "kube-controller-manager-addons-758852" [b3570c4c-1b78-4fc6-92c6-07fe8ef67399] Running
	I1108 08:30:23.027152   10713 system_pods.go:89] "kube-ingress-dns-minikube" [ef58b03e-0552-4023-a055-ad5dda85abb6] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I1108 08:30:23.027162   10713 system_pods.go:89] "kube-proxy-fkvsn" [89eb835c-bbb8-444a-8c35-7a02b86519aa] Running
	I1108 08:30:23.027168   10713 system_pods.go:89] "kube-scheduler-addons-758852" [52e699d2-377e-43f0-a57a-17e693cdd23d] Running
	I1108 08:30:23.027180   10713 system_pods.go:89] "metrics-server-85b7d694d7-g65zk" [03107c9e-9301-427d-9799-b0b0d4ceaf14] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I1108 08:30:23.027188   10713 system_pods.go:89] "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Pending / Ready:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr]) / ContainersReady:ContainersNotReady (containers with unready status: [nvidia-device-plugin-ctr])
	I1108 08:30:23.027200   10713 system_pods.go:89] "registry-6b586f9694-8mkgh" [87ce4e2c-d92b-4d6a-b33c-0069d365d282] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I1108 08:30:23.027209   10713 system_pods.go:89] "registry-creds-764b6fb674-rjbxd" [6574dc0f-978b-434f-99a1-1452a69af882] Pending / Ready:ContainersNotReady (containers with unready status: [registry-creds]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-creds])
	I1108 08:30:23.027218   10713 system_pods.go:89] "registry-proxy-j697c" [73ccf46a-6d6f-47d0-a0bc-b62b748f5db5] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I1108 08:30:23.027226   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-8dlk9" [659e0a87-ae36-4521-be11-3b5ddd5d7b12] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:23.027237   10713 system_pods.go:89] "snapshot-controller-7d9fbc56b8-vkhw9" [b9b556ed-8b77-4d81-b441-3d54dc6bc3a2] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I1108 08:30:23.027243   10713 system_pods.go:89] "storage-provisioner" [11ff2142-3248-4a79-87d5-34187572d1c6] Running
	I1108 08:30:23.027256   10713 system_pods.go:126] duration metric: took 657.330258ms to wait for k8s-apps to be running ...
	I1108 08:30:23.027265   10713 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 08:30:23.027316   10713 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:30:23.044042   10713 system_svc.go:56] duration metric: took 16.769177ms WaitForService to wait for kubelet
	I1108 08:30:23.044074   10713 kubeadm.go:587] duration metric: took 41.757039106s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
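	The kubelet gate shells out to systemctl and reads only the exit status, since --quiet suppresses all output. A minimal sketch mirroring the command recorded above (the kubeletActive helper is illustrative):

	// Sketch: treat a zero exit code from systemctl is-active as "running".
	package main

	import (
		"fmt"
		"os/exec"
	)

	func kubeletActive() bool {
		// Arguments copied from the log line above.
		cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
		return cmd.Run() == nil // nil error means exit status 0
	}

	func main() {
		fmt.Println("kubelet active:", kubeletActive())
	}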
	I1108 08:30:23.044094   10713 node_conditions.go:102] verifying NodePressure condition ...
	I1108 08:30:23.046731   10713 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 08:30:23.046764   10713 node_conditions.go:123] node cpu capacity is 8
	I1108 08:30:23.046781   10713 node_conditions.go:105] duration metric: took 2.678855ms to run NodePressure ...
	I1108 08:30:23.046792   10713 start.go:242] waiting for startup goroutines ...
	I1108 08:30:23.101647   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:23.230733   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:23.298743   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:23.299213   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:23.600995   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:23.730250   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:23.830972   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:23.831082   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:24.101333   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:24.230137   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:24.298739   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:24.298796   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:24.601896   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:24.730163   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:24.830726   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:24.830746   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:25.101576   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:25.230839   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:25.298816   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:25.299410   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:25.600981   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:25.730166   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:25.799328   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:25.799449   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:26.101797   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:26.229984   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:26.299028   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:26.299149   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:26.600836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:26.729870   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:26.798804   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:26.799077   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:27.102047   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:27.230810   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:27.300249   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:27.302650   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:27.602968   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:27.730972   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:27.799356   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:27.799396   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:28.101169   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:28.230019   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:28.298760   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:28.298837   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:28.601917   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:28.730176   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:28.799582   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:28.799612   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:29.100644   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:29.229734   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:29.298493   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:29.298877   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:29.601059   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:29.729846   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:29.798724   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:29.799087   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:30.101828   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:30.230070   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:30.298595   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:30.298796   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:30.601614   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:30.731037   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:30.799322   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:30.799335   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:31.101104   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:31.229841   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:31.298663   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:31.299258   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:31.602030   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:31.730443   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:31.798836   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:31.798904   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:32.101790   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:32.230272   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:32.299521   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:32.299668   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:32.601803   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:32.730178   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:32.798852   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:32.801372   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:33.101589   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:33.230665   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:33.299154   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:33.299440   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.601013   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:33.730152   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:33.799139   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:33.799199   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.101153   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:34.230704   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:34.365150   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.365345   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:34.601035   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:34.730223   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:34.798521   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:34.799044   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:35.101620   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:35.230014   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:35.300126   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:35.300569   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.601112   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:35.729748   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:35.800353   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:35.800422   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.102074   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:36.229864   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:36.299040   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:36.299117   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.600965   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:36.729894   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:36.798875   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:36.799023   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:37.100834   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:37.229707   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:37.299292   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:37.299429   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.601545   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:37.729857   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:37.798878   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:37.799729   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.102742   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:38.230804   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:38.298376   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:38.298973   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:38.601081   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:38.730400   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:38.800154   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:38.800364   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.112218   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:39.231213   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:39.298993   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.299041   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:39.600927   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:39.730533   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:39.799696   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:39.799888   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:40.101797   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:40.230088   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:40.298945   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.299110   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:40.602122   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:40.730623   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:40.799332   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:40.799373   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.100873   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:41.229998   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:41.298478   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:41.298520   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.601447   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:41.730856   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:41.798735   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:41.799156   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.210752   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:42.229220   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:42.298846   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.298955   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:42.600754   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:42.729598   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:42.799196   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:42.799228   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:43.102061   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:43.230017   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:43.298881   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:43.299102   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:43.601738   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:43.729774   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:43.831115   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:43.831165   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:44.101314   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:44.230422   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:44.299293   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:44.299515   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:44.601624   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:44.730370   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:44.798995   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:44.799053   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.100619   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:45.230325   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:45.298955   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:45.299031   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.604652   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:45.730858   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:45.798600   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:45.799263   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:46.101813   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:46.229955   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:46.298516   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:46.298887   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:46.603599   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:46.731385   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:46.799846   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:46.799961   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1108 08:30:47.102734   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:47.230750   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:47.299407   10713 kapi.go:107] duration metric: took 1m4.503232429s to wait for kubernetes.io/minikube-addons=registry ...
	I1108 08:30:47.299514   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:47.601573   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:47.730705   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:47.799800   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:48.101844   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:48.230097   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:48.298833   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:48.601823   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:48.729938   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:48.798790   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:49.102356   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:49.229878   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:49.298430   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:49.601213   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:49.730502   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:49.799326   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:50.100770   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:50.229947   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:50.298502   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:50.601655   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:50.731619   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:50.834482   10713 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1108 08:30:51.102786   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:51.231234   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:51.299176   10713 kapi.go:107] duration metric: took 1m8.503662851s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1108 08:30:51.649312   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:51.730405   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:52.101809   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:52.229575   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:52.601218   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:52.730571   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:53.101737   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:53.229731   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:53.601251   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:53.730667   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:54.102018   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:54.230244   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:54.602184   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:54.730651   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:55.101086   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:55.230401   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:55.600628   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:55.729238   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:56.101722   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:56.230071   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:56.601204   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:56.730396   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:57.169811   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:57.272523   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:57.601608   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1108 08:30:57.731063   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:58.101459   10713 kapi.go:107] duration metric: took 1m8.50347504s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1108 08:30:58.103241   10713 out.go:179] * Your GCP credentials will now be mounted into every pod created in the addons-758852 cluster.
	I1108 08:30:58.104479   10713 out.go:179] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1108 08:30:58.105670   10713 out.go:179] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I1108 08:30:58.230453   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:58.729231   10713 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1108 08:30:59.230736   10713 kapi.go:107] duration metric: took 1m16.004136907s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1108 08:30:59.232401   10713 out.go:179] * Enabled addons: amd-gpu-device-plugin, inspektor-gadget, ingress-dns, storage-provisioner, nvidia-device-plugin, cloud-spanner, registry-creds, yakd, storage-provisioner-rancher, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1108 08:30:59.233533   10713 addons.go:515] duration metric: took 1m17.946466312s for enable addons: enabled=[amd-gpu-device-plugin inspektor-gadget ingress-dns storage-provisioner nvidia-device-plugin cloud-spanner registry-creds yakd storage-provisioner-rancher metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I1108 08:30:59.233573   10713 start.go:247] waiting for cluster config update ...
	I1108 08:30:59.233601   10713 start.go:256] writing updated cluster config ...
	I1108 08:30:59.233863   10713 ssh_runner.go:195] Run: rm -f paused
	I1108 08:30:59.237745   10713 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 08:30:59.240646   10713 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-6cwbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.244537   10713 pod_ready.go:94] pod "coredns-66bc5c9577-6cwbz" is "Ready"
	I1108 08:30:59.244559   10713 pod_ready.go:86] duration metric: took 3.893202ms for pod "coredns-66bc5c9577-6cwbz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.246209   10713 pod_ready.go:83] waiting for pod "etcd-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.249376   10713 pod_ready.go:94] pod "etcd-addons-758852" is "Ready"
	I1108 08:30:59.249397   10713 pod_ready.go:86] duration metric: took 3.169952ms for pod "etcd-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.251010   10713 pod_ready.go:83] waiting for pod "kube-apiserver-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.254134   10713 pod_ready.go:94] pod "kube-apiserver-addons-758852" is "Ready"
	I1108 08:30:59.254150   10713 pod_ready.go:86] duration metric: took 3.119361ms for pod "kube-apiserver-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.255802   10713 pod_ready.go:83] waiting for pod "kube-controller-manager-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.641318   10713 pod_ready.go:94] pod "kube-controller-manager-addons-758852" is "Ready"
	I1108 08:30:59.641357   10713 pod_ready.go:86] duration metric: took 385.535714ms for pod "kube-controller-manager-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:30:59.840706   10713 pod_ready.go:83] waiting for pod "kube-proxy-fkvsn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.241029   10713 pod_ready.go:94] pod "kube-proxy-fkvsn" is "Ready"
	I1108 08:31:00.241056   10713 pod_ready.go:86] duration metric: took 400.324804ms for pod "kube-proxy-fkvsn" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.441893   10713 pod_ready.go:83] waiting for pod "kube-scheduler-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.841234   10713 pod_ready.go:94] pod "kube-scheduler-addons-758852" is "Ready"
	I1108 08:31:00.841263   10713 pod_ready.go:86] duration metric: took 399.34376ms for pod "kube-scheduler-addons-758852" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 08:31:00.841276   10713 pod_ready.go:40] duration metric: took 1.603501971s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 08:31:00.884824   10713 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 08:31:00.886921   10713 out.go:179] * Done! kubectl is now configured to use "addons-758852" cluster and "default" namespace by default
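
The kapi.go:96/107 lines above are minikube's addon wait loop: pods matching each addon's label selector are polled until they leave Pending, and the elapsed time is logged as a duration metric. Below is a minimal sketch of that polling pattern with client-go; the function name and intervals are illustrative assumptions, not minikube's actual kapi.go implementation.

```go
// Package podwait sketches the label-selector polling seen in the kapi.go
// log lines above. Illustrative only; not minikube's implementation.
package podwait

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// WaitForPodsRunning polls every 500ms until every pod matching selector in
// ns is Running, mirroring "waiting for pod ... current state: Pending".
func WaitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	start := time.Now()
	err := wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil || len(pods.Items) == 0 {
				return false, nil // keep polling through transient errors and empty lists
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					fmt.Printf("waiting for pod %q, current state: %s\n", selector, p.Status.Phase)
					return false, nil
				}
			}
			return true, nil
		})
	if err == nil {
		fmt.Printf("duration metric: took %s to wait for %s\n", time.Since(start), selector)
	}
	return err
}
```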
	
	
	==> CRI-O <==
	Nov 08 08:30:58 addons-758852 crio[773]: time="2025-11-08T08:30:58.050246315Z" level=info msg="Starting container: f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a" id=50361b77-1569-43b7-8735-ba6bfacc0f38 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 08:30:58 addons-758852 crio[773]: time="2025-11-08T08:30:58.052775947Z" level=info msg="Started container" PID=6118 containerID=f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a description=kube-system/csi-hostpathplugin-rtgg7/csi-snapshotter id=50361b77-1569-43b7-8735-ba6bfacc0f38 name=/runtime.v1.RuntimeService/StartContainer sandboxID=b35b019d9d6ff325309143a447c22213ae01e7740444dfaa4be486de054b454b
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.715263056Z" level=info msg="Running pod sandbox: default/busybox/POD" id=59cf8585-832c-46ca-8919-5db82cb41e02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.715386117Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.722930121Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:de20eb6ae573a77879e45adc60f9d56976efed6f9343104459605fa1f7dbbcaf UID:850742cc-4864-4985-838b-99ba86e8a88f NetNS:/var/run/netns/0f0db9d3-9a82-42df-be92-c64f8a9d045d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d16198}] Aliases:map[]}"
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.722961848Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.73271652Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:de20eb6ae573a77879e45adc60f9d56976efed6f9343104459605fa1f7dbbcaf UID:850742cc-4864-4985-838b-99ba86e8a88f NetNS:/var/run/netns/0f0db9d3-9a82-42df-be92-c64f8a9d045d Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000d16198}] Aliases:map[]}"
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.732841266Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.733732446Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.734538022Z" level=info msg="Ran pod sandbox de20eb6ae573a77879e45adc60f9d56976efed6f9343104459605fa1f7dbbcaf with infra container: default/busybox/POD" id=59cf8585-832c-46ca-8919-5db82cb41e02 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.736008811Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=bc5edb8c-9df7-46bd-95a6-dba9e47702d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.736160018Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=bc5edb8c-9df7-46bd-95a6-dba9e47702d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.736205839Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=bc5edb8c-9df7-46bd-95a6-dba9e47702d2 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.736788887Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0a68b289-d38c-4400-98e0-1222d2f8a099 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:31:01 addons-758852 crio[773]: time="2025-11-08T08:31:01.73849289Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.070417131Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=0a68b289-d38c-4400-98e0-1222d2f8a099 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.070934935Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=21d82296-9078-43e9-9229-aed973fc4072 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.072312813Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=ab51d3c3-8a7b-4b16-b2d5-55ed342d914d name=/runtime.v1.ImageService/ImageStatus
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.075880255Z" level=info msg="Creating container: default/busybox/busybox" id=ef553537-863a-4ff6-893a-662711a42137 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.076002508Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.081878659Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.08227081Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.114589815Z" level=info msg="Created container 512aa15697e55952741bcb339e13c101d84fc57fdfd1de3f3b207c640cbc9b0f: default/busybox/busybox" id=ef553537-863a-4ff6-893a-662711a42137 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.115171244Z" level=info msg="Starting container: 512aa15697e55952741bcb339e13c101d84fc57fdfd1de3f3b207c640cbc9b0f" id=3bd8e7d1-9360-4697-92b3-dfe7cee0a739 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 08:31:03 addons-758852 crio[773]: time="2025-11-08T08:31:03.116889297Z" level=info msg="Started container" PID=6234 containerID=512aa15697e55952741bcb339e13c101d84fc57fdfd1de3f3b207c640cbc9b0f description=default/busybox/busybox id=3bd8e7d1-9360-4697-92b3-dfe7cee0a739 name=/runtime.v1.RuntimeService/StartContainer sandboxID=de20eb6ae573a77879e45adc60f9d56976efed6f9343104459605fa1f7dbbcaf
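
The CRI-O entries above trace the standard CRI call sequence for the busybox pod: RunPodSandbox, then ImageStatus (image not found), PullImage, CreateContainer, and StartContainer. Here is a sketch of the image half of that sequence against the runtime's gRPC socket using the k8s.io/cri-api client; the socket path is an assumption for a default CRI-O install.

```go
// A sketch of the ImageStatus/PullImage pair from the CRI-O log above.
package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Default CRI-O socket path; adjust for other runtimes.
	conn, err := grpc.NewClient("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	img := runtimeapi.NewImageServiceClient(conn)
	spec := &runtimeapi.ImageSpec{Image: "gcr.io/k8s-minikube/busybox:1.28.4-glibc"}

	// "Checking image status": a nil Image in the response means not present.
	st, err := img.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{Image: spec})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		// "Pulling image": the runtime resolves and fetches the tag.
		if _, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{Image: spec}); err != nil {
			log.Fatal(err)
		}
	}
	// Container creation and start would follow through
	// runtimeapi.NewRuntimeServiceClient(conn), against the sandbox ID
	// returned by RunPodSandbox.
}
```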
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID              POD                                         NAMESPACE
	512aa15697e55       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998                                          8 seconds ago        Running             busybox                                  0                   de20eb6ae573a       busybox                                     default
	f34be8782c294       registry.k8s.io/sig-storage/csi-snapshotter@sha256:d844cb1faeb4ecf44bae6aea370c9c6128a87e665e40370021427d79a8819ee5                          13 seconds ago       Running             csi-snapshotter                          0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	ad24fc3016e0b       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:441f351b4520c228d29ba8c02a438d9ba971dafbbba5c91eaf882b1528797fb8                                 13 seconds ago       Running             gcp-auth                                 0                   1c457cc7e176a       gcp-auth-78565c9fb4-99tsv                   gcp-auth
	66198912dbb4c       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          16 seconds ago       Running             csi-provisioner                          0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	ef0ec581e5d71       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            17 seconds ago       Running             liveness-probe                           0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	83841cdc49661       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           17 seconds ago       Running             hostpath                                 0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	5204be461b8fb       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:9a12b3c1d155bb081ff408a9b6c1cec18573c967e0c3917225b81ffe11c0b7f2                            18 seconds ago       Running             gadget                                   0                   c3eb4b38ebd8c       gadget-jb2ln                                gadget
	f340f0145eb9b       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                20 seconds ago       Running             node-driver-registrar                    0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	b0fd0c2b4b9a8       registry.k8s.io/ingress-nginx/controller@sha256:7b4073fc95e078d863c0b0b08deb72e01d2cf629e2156822bcd394fc2bcd8e83                             21 seconds ago       Running             controller                               0                   6a0888bf2f3a7       ingress-nginx-controller-675c5ddd98-qd9l6   ingress-nginx
	10f4c3a3e2558       gcr.io/k8s-minikube/kube-registry-proxy@sha256:8f72a79b63ca56074435e82b87fca2642a8117e60be313d3586dbe2bfff11cac                              24 seconds ago       Running             registry-proxy                           0                   d1aec8411ff96       registry-proxy-j697c                        kube-system
	db7058dc33833       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   26 seconds ago       Running             csi-external-health-monitor-controller   0                   b35b019d9d6ff       csi-hostpathplugin-rtgg7                    kube-system
	f00a5461baa14       08cfe302feafeabe4c2747ba112aa93917a7468cdd19a8835b48eb2ac88a7bf2                                                                             27 seconds ago       Exited              patch                                    1                   f9194bc7f5dd0       gcp-auth-certs-patch-58q98                  gcp-auth
	6d772d3f55810       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   27 seconds ago       Exited              create                                   0                   502add19cd417       gcp-auth-certs-create-fkkhc                 gcp-auth
	1aaad9983441a       docker.io/rocm/k8s-device-plugin@sha256:f3835498cf2274e0a07c32b38c166c05a876f8eb776d756cc06805e599a3ba5f                                     27 seconds ago       Running             amd-gpu-device-plugin                    0                   1ad6a7975c94b       amd-gpu-device-plugin-fgsj6                 kube-system
	8aabc952ff686       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      28 seconds ago       Running             volume-snapshot-controller               0                   8645eeba25397       snapshot-controller-7d9fbc56b8-vkhw9        kube-system
	07cf5a2c38f59       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              28 seconds ago       Running             csi-resizer                              0                   abbc1bc3c0751       csi-hostpath-resizer-0                      kube-system
	db1083da29dce       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      30 seconds ago       Running             volume-snapshot-controller               0                   b4566a4fb863c       snapshot-controller-7d9fbc56b8-8dlk9        kube-system
	b2bfae5b5011c       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             30 seconds ago       Running             csi-attacher                             0                   ecfa30768cd66       csi-hostpath-attacher-0                     kube-system
	88464ad8c8a6f       nvcr.io/nvidia/k8s-device-plugin@sha256:20db699f1480b6f37423cab909e9c6df5a4fdbd981b405e0d72f00a86fee5100                                     31 seconds ago       Running             nvidia-device-plugin-ctr                 0                   635e2616172ae       nvidia-device-plugin-daemonset-tzbp6        kube-system
	b19a1caa72578       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   34 seconds ago       Exited              patch                                    0                   e9df9fb0dde82       ingress-nginx-admission-patch-49bbt         ingress-nginx
	b58739220d7fd       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:603a4996fc2ece451c708708e2881a855991cda47ddca5a4458b69a04f48d7f2                   34 seconds ago       Exited              create                                   0                   c4fb5dd85b31a       ingress-nginx-admission-create-t2bkq        ingress-nginx
	6df5c42a3809d       docker.io/library/registry@sha256:f57ffd2bb01704b6082396158e77ca6d1112bc6fe32315c322864de804750d8a                                           35 seconds ago       Running             registry                                 0                   d1d34eec0eb2e       registry-6b586f9694-8mkgh                   kube-system
	9ede2f18a3c3e       docker.io/marcnuri/yakd@sha256:8ebd1692ed5271719f13b728d9af7acb839aa04821e931c8993d908ad68b69fd                                              36 seconds ago       Running             yakd                                     0                   293c8645f4c7a       yakd-dashboard-5ff678cb9-v2brq              yakd-dashboard
	f542c8d2df432       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef                             38 seconds ago       Running             local-path-provisioner                   0                   a96b103013a08       local-path-provisioner-648f6765c9-6h2gs     local-path-storage
	f8285831ae530       docker.io/kicbase/minikube-ingress-dns@sha256:a0cc6cd76812357245a51bb05fabcd346a616c880e40ca4e0c8c8253912eaae7                               39 seconds ago       Running             minikube-ingress-dns                     0                   7f0d658d7cce5       kube-ingress-dns-minikube                   kube-system
	c5530737e9c19       gcr.io/cloud-spanner-emulator/emulator@sha256:7360f5c5ff4b89d75592d9585fc2d59d207b08ccf262a84edfe79ee0613a7099                               44 seconds ago       Running             cloud-spanner-emulator                   0                   d06fae3dfc55f       cloud-spanner-emulator-6f9fcf858b-j98cr     default
	af0574068f104       registry.k8s.io/metrics-server/metrics-server@sha256:5dd31abb8093690d9624a53277a00d2257e7e57e6766be3f9f54cf9f54cddbc1                        47 seconds ago       Running             metrics-server                           0                   8159f2ec93496       metrics-server-85b7d694d7-g65zk             kube-system
	a616ef6928972       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                                             48 seconds ago       Running             coredns                                  0                   3faffebd8a2fe       coredns-66bc5c9577-6cwbz                    kube-system
	76b41f4794cf9       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             48 seconds ago       Running             storage-provisioner                      0                   883e216febd78       storage-provisioner                         kube-system
	10b7c804477d9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                                             About a minute ago   Running             kindnet-cni                              0                   4b3609e12c475       kindnet-6qtgf                               kube-system
	f2b09aff0e553       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                                             About a minute ago   Running             kube-proxy                               0                   d63870490ba4a       kube-proxy-fkvsn                            kube-system
	e08d383ff6705       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                                             About a minute ago   Running             kube-apiserver                           0                   a68527097af7c       kube-apiserver-addons-758852                kube-system
	8e136e1e55dba       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                                             About a minute ago   Running             kube-controller-manager                  0                   5bae20e97ddbc       kube-controller-manager-addons-758852       kube-system
	ee1613ab5f8f0       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                                             About a minute ago   Running             etcd                                     0                   f8fad627cf760       etcd-addons-758852                          kube-system
	61e01b287696c       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                                             About a minute ago   Running             kube-scheduler                           0                   0e512812e48d1       kube-scheduler-addons-758852                kube-system
	
	
	==> coredns [a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c] <==
	[INFO] 10.244.0.18:45280 - 60143 "A IN registry.kube-system.svc.cluster.local.local. udp 62 false 512" NXDOMAIN qr,rd,ra 62 0.003545581s
	[INFO] 10.244.0.18:48591 - 7274 "AAAA IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000058951s
	[INFO] 10.244.0.18:48591 - 6930 "A IN registry.kube-system.svc.cluster.local.us-east4-a.c.k8s-minikube.internal. udp 91 false 512" NXDOMAIN qr,aa,rd,ra 198 0.000091363s
	[INFO] 10.244.0.18:32910 - 18879 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.000069768s
	[INFO] 10.244.0.18:32910 - 18662 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,aa,rd,ra 185 0.00010375s
	[INFO] 10.244.0.18:52515 - 24747 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000049472s
	[INFO] 10.244.0.18:52515 - 24431 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,aa,rd,ra 177 0.000056684s
	[INFO] 10.244.0.18:49038 - 30657 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00011668s
	[INFO] 10.244.0.18:49038 - 30253 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000152645s
	[INFO] 10.244.0.22:50552 - 33148 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000210251s
	[INFO] 10.244.0.22:48129 - 50133 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000191031s
	[INFO] 10.244.0.22:38848 - 38968 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000116852s
	[INFO] 10.244.0.22:40337 - 24153 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000109992s
	[INFO] 10.244.0.22:43605 - 9442 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000132176s
	[INFO] 10.244.0.22:58534 - 37415 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000163076s
	[INFO] 10.244.0.22:40931 - 45084 "A IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.005401676s
	[INFO] 10.244.0.22:58073 - 13754 "AAAA IN storage.googleapis.com.local. udp 57 false 1232" NXDOMAIN qr,rd,ra 46 0.006299983s
	[INFO] 10.244.0.22:53456 - 8784 "A IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00705468s
	[INFO] 10.244.0.22:33291 - 10676 "AAAA IN storage.googleapis.com.us-east4-a.c.k8s-minikube.internal. udp 86 false 1232" NXDOMAIN qr,rd,ra 182 0.00705789s
	[INFO] 10.244.0.22:43067 - 5658 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.004801683s
	[INFO] 10.244.0.22:59546 - 63091 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 169 0.006093941s
	[INFO] 10.244.0.22:36763 - 42697 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004409635s
	[INFO] 10.244.0.22:38420 - 57096 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 161 0.004921924s
	[INFO] 10.244.0.22:45256 - 31406 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 534 0.001965851s
	[INFO] 10.244.0.22:50115 - 36738 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002108482s
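
The NXDOMAIN runs above are resolv.conf search-domain expansion at work: with the typical cluster ndots:5 setting, a name like storage.googleapis.com is first tried against every search suffix (gcp-auth.svc.cluster.local, svc.cluster.local, cluster.local, and the GCE internal domains) before the bare name finally answers NOERROR. A trailing dot marks a name as fully qualified and skips that expansion; a small Go sketch:

```go
// A rooted (trailing-dot) lookup bypasses resolv.conf search suffixes.
package main

import (
	"context"
	"fmt"
	"net"
)

func main() {
	// Without the trailing dot, each search domain would be tried first,
	// producing the NXDOMAIN sequence seen in the coredns log.
	addrs, err := net.DefaultResolver.LookupHost(context.Background(),
		"registry.kube-system.svc.cluster.local.")
	fmt.Println(addrs, err)
}
```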
	
	
	==> describe nodes <==
	Name:               addons-758852
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-758852
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=addons-758852
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T08_29_36_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-758852
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-758852"}
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 08:29:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-758852
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 08:31:07 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 08:31:07 +0000   Sat, 08 Nov 2025 08:29:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 08:31:07 +0000   Sat, 08 Nov 2025 08:29:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 08:31:07 +0000   Sat, 08 Nov 2025 08:29:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 08:31:07 +0000   Sat, 08 Nov 2025 08:30:22 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-758852
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                7d4bd929-f477-47c1-b3ca-97cfa03ee98a
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (27 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  default                     cloud-spanner-emulator-6f9fcf858b-j98cr      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gadget                      gadget-jb2ln                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  gcp-auth                    gcp-auth-78565c9fb4-99tsv                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         82s
	  ingress-nginx               ingress-nginx-controller-675c5ddd98-qd9l6    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         89s
	  kube-system                 amd-gpu-device-plugin-fgsj6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 coredns-66bc5c9577-6cwbz                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     90s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 csi-hostpathplugin-rtgg7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 etcd-addons-758852                           100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         96s
	  kube-system                 kindnet-6qtgf                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      91s
	  kube-system                 kube-apiserver-addons-758852                 250m (3%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-controller-manager-addons-758852        200m (2%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-proxy-fkvsn                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 kube-scheduler-addons-758852                 100m (1%)     0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 metrics-server-85b7d694d7-g65zk              100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         89s
	  kube-system                 nvidia-device-plugin-daemonset-tzbp6         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 registry-6b586f9694-8mkgh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-creds-764b6fb674-rjbxd              0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 registry-proxy-j697c                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         49s
	  kube-system                 snapshot-controller-7d9fbc56b8-8dlk9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 snapshot-controller-7d9fbc56b8-vkhw9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  local-path-storage          local-path-provisioner-648f6765c9-6h2gs      0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  yakd-dashboard              yakd-dashboard-5ff678cb9-v2brq               0 (0%)        0 (0%)      128Mi (0%)       256Mi (0%)     89s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             638Mi (1%)   476Mi (1%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 89s   kube-proxy       
	  Normal  Starting                 96s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  96s   kubelet          Node addons-758852 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    96s   kubelet          Node addons-758852 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     96s   kubelet          Node addons-758852 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           91s   node-controller  Node addons-758852 event: Registered Node addons-758852 in Controller
	  Normal  NodeReady                49s   kubelet          Node addons-758852 status is now: NodeReady
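
The node conditions and allocatable figures above are what client-go exposes through the Node API. A sketch of checking the same Ready condition programmatically follows; clientset construction is assumed to happen elsewhere.

```go
// Package nodestatus sketches reading the Ready condition shown in the
// "describe nodes" output above.
package nodestatus

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// NodeReady reports whether the named node's Ready condition is True,
// the same condition kubelet flips at the NodeReady event above.
func NodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}
```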
	
	
	==> dmesg <==
	[Nov 8 08:17] MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
	[  +0.001671] TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
	[  +0.001000] MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
	[  +0.086011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
	[  +0.402780] i8042: Warning: Keylock active
	[  +0.011721] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.510667] block sda: the capability attribute has been deprecated.
	[  +0.084884] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.205659] kauditd_printk_skb: 47 callbacks suppressed
	
	
	==> etcd [ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268] <==
	{"level":"warn","ts":"2025-11-08T08:29:32.622206Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41476","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:32.627891Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41498","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:32.633799Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:32.683842Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:41530","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:43.715183Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:29:43.721538Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:34170","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:10.078212Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37208","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:10.084785Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37218","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:10.105172Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:37248","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:30:34.363836Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"102.90119ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:34.363954Z","caller":"traceutil/trace.go:172","msg":"trace[984066840] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:990; }","duration":"103.044205ms","start":"2025-11-08T08:30:34.260896Z","end":"2025-11-08T08:30:34.363940Z","steps":["trace[984066840] 'range keys from in-memory index tree'  (duration: 102.863043ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:39.151960Z","caller":"traceutil/trace.go:172","msg":"trace[1019913915] transaction","detail":"{read_only:false; response_revision:1026; number_of_response:1; }","duration":"116.50412ms","start":"2025-11-08T08:30:39.035428Z","end":"2025-11-08T08:30:39.151932Z","steps":["trace[1019913915] 'process raft request'  (duration: 116.394617ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:30:42.208263Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"225.851125ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiextensions.k8s.io/customresourcedefinitions\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:42.208354Z","caller":"traceutil/trace.go:172","msg":"trace[1773635745] range","detail":"{range_begin:/registry/apiextensions.k8s.io/customresourcedefinitions; range_end:; response_count:0; response_revision:1054; }","duration":"225.957942ms","start":"2025-11-08T08:30:41.982382Z","end":"2025-11-08T08:30:42.208340Z","steps":["trace[1773635745] 'agreement among raft nodes before linearized reading'  (duration: 92.228267ms)","trace[1773635745] 'range keys from in-memory index tree'  (duration: 133.599745ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:30:42.208982Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"133.778548ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128041175706319561 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/deployments/kube-system/snapshot-controller\" mod_revision:700 > success:<request_put:<key:\"/registry/deployments/kube-system/snapshot-controller\" value_size:3313 >> failure:<request_range:<key:\"/registry/deployments/kube-system/snapshot-controller\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T08:30:42.209037Z","caller":"traceutil/trace.go:172","msg":"trace[1497930883] linearizableReadLoop","detail":"{readStateIndex:1082; appliedIndex:1081; }","duration":"134.434586ms","start":"2025-11-08T08:30:42.074593Z","end":"2025-11-08T08:30:42.209027Z","steps":["trace[1497930883] 'read index received'  (duration: 29.495µs)","trace[1497930883] 'applied index is now lower than readState.Index'  (duration: 134.404184ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T08:30:42.209068Z","caller":"traceutil/trace.go:172","msg":"trace[623641400] transaction","detail":"{read_only:false; response_revision:1055; number_of_response:1; }","duration":"244.601489ms","start":"2025-11-08T08:30:41.964445Z","end":"2025-11-08T08:30:42.209047Z","steps":["trace[623641400] 'process raft request'  (duration: 110.159929ms)","trace[623641400] 'compare'  (duration: 133.644022ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T08:30:42.209116Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"184.512151ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-08T08:30:42.209122Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"176.809522ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:42.209155Z","caller":"traceutil/trace.go:172","msg":"trace[1929175426] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1055; }","duration":"176.845827ms","start":"2025-11-08T08:30:42.032301Z","end":"2025-11-08T08:30:42.209147Z","steps":["trace[1929175426] 'agreement among raft nodes before linearized reading'  (duration: 176.781607ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:30:42.209172Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"109.358891ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:30:42.209135Z","caller":"traceutil/trace.go:172","msg":"trace[1943306984] range","detail":"{range_begin:/registry/csinodes; range_end:; response_count:0; response_revision:1055; }","duration":"184.534166ms","start":"2025-11-08T08:30:42.024595Z","end":"2025-11-08T08:30:42.209129Z","steps":["trace[1943306984] 'agreement among raft nodes before linearized reading'  (duration: 184.490792ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:42.209188Z","caller":"traceutil/trace.go:172","msg":"trace[611731499] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:1055; }","duration":"109.375055ms","start":"2025-11-08T08:30:42.099808Z","end":"2025-11-08T08:30:42.209183Z","steps":["trace[611731499] 'agreement among raft nodes before linearized reading'  (duration: 109.345185ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:57.168413Z","caller":"traceutil/trace.go:172","msg":"trace[2029798354] transaction","detail":"{read_only:false; response_revision:1177; number_of_response:1; }","duration":"108.020124ms","start":"2025-11-08T08:30:57.060376Z","end":"2025-11-08T08:30:57.168396Z","steps":["trace[2029798354] 'process raft request'  (duration: 106.252492ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:30:57.169598Z","caller":"traceutil/trace.go:172","msg":"trace[1210832985] transaction","detail":"{read_only:false; response_revision:1178; number_of_response:1; }","duration":"105.74415ms","start":"2025-11-08T08:30:57.063841Z","end":"2025-11-08T08:30:57.169585Z","steps":["trace[1210832985] 'process raft request'  (duration: 105.667943ms)"],"step_count":1}
	
	
	==> gcp-auth [ad24fc3016e0b3eb6344f22c175b7d28a097ad9fda49713783997fcc2a9fba3f] <==
	2025/11/08 08:30:57 GCP Auth Webhook started!
	2025/11/08 08:31:01 Ready to marshal response ...
	2025/11/08 08:31:01 Ready to write response ...
	2025/11/08 08:31:01 Ready to marshal response ...
	2025/11/08 08:31:01 Ready to write response ...
	2025/11/08 08:31:01 Ready to marshal response ...
	2025/11/08 08:31:01 Ready to write response ...
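
This webhook is what the earlier gcp-auth notes refer to: it mutates new pods to mount GCP credentials unless the pod carries the gcp-auth-skip-secret label. A sketch of an opted-out pod spec using client-go types; the pod name, image, and label value are placeholder assumptions, since the log only specifies the label key.

```go
// Package gcpauth sketches a pod labeled to skip gcp-auth credential mounting.
package gcpauth

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// PodWithoutCreds returns a pod the gcp-auth webhook should leave alone.
func PodWithoutCreds() *corev1.Pod {
	return &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-creds",
			// The key is what matters per the log; the value here is arbitrary.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "busybox:1.28"}},
		},
	}
}
```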
	
	
	==> kernel <==
	 08:31:11 up 13 min,  0 user,  load average: 1.05, 0.54, 0.21
	Linux addons-758852 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3] <==
	I1108 08:29:41.524620       1 main.go:148] setting mtu 1500 for CNI 
	I1108 08:29:41.524645       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 08:29:41.524669       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T08:29:41Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 08:29:41.802764       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 08:29:41.802800       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 08:29:41.802812       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 08:29:41.805400       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 08:30:11.803638       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	E1108 08:30:11.805656       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 08:30:11.805695       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 08:30:11.882238       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	I1108 08:30:13.502903       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 08:30:13.502935       1 metrics.go:72] Registering metrics
	I1108 08:30:13.503000       1 controller.go:711] "Syncing nftables rules"
	I1108 08:30:21.720351       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:30:21.720409       1 main.go:301] handling current node
	I1108 08:30:31.714489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:30:31.714537       1 main.go:301] handling current node
	I1108 08:30:41.714958       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:30:41.715009       1 main.go:301] handling current node
	I1108 08:30:51.714403       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:30:51.714456       1 main.go:301] handling current node
	I1108 08:31:01.715104       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:31:01.715136       1 main.go:301] handling current node
	
	
	==> kube-apiserver [e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34] <==
	W1108 08:30:10.078126       1 logging.go:55] [core] [Channel #270 SubChannel #271]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 08:30:10.084712       1 logging.go:55] [core] [Channel #274 SubChannel #275]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 08:30:10.098172       1 logging.go:55] [core] [Channel #278 SubChannel #279]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 08:30:10.105130       1 logging.go:55] [core] [Channel #282 SubChannel #283]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", BalancerAttributes: {"<%!p(pickfirstleaf.managedByPickfirstKeyType={})>": "<%!p(bool=true)>" }}. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: operation was canceled"
	W1108 08:30:22.247382       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.247427       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:22.247604       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.247637       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:22.265597       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.265709       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:22.272340       1 dispatcher.go:210] Failed calling webhook, failing open gcp-auth-mutate.k8s.io: failed calling webhook "gcp-auth-mutate.k8s.io": failed to call webhook: Post "https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s": dial tcp 10.103.246.121:443: connect: connection refused
	E1108 08:30:22.272379       1 dispatcher.go:214] "Unhandled Error" err="failed calling webhook \"gcp-auth-mutate.k8s.io\": failed to call webhook: Post \"https://gcp-auth.gcp-auth.svc:443/mutate?timeout=10s\": dial tcp 10.103.246.121:443: connect: connection refused" logger="UnhandledError"
	W1108 08:30:25.778639       1 handler_proxy.go:99] no RequestInfo found in the context
	E1108 08:30:25.778698       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	E1108 08:30:25.779264       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.780455       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.785749       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.806546       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	E1108 08:30:25.848173       1 remote_available_controller.go:462] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.106.200.236:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.106.200.236:443: connect: connection refused" logger="UnhandledError"
	I1108 08:30:25.960345       1 handler.go:285] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1108 08:31:09.547356       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38272: use of closed network connection
	E1108 08:31:09.692666       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:38306: use of closed network connection
	
	
	==> kube-controller-manager [8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6] <==
	I1108 08:29:40.063352       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 08:29:40.063464       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 08:29:40.063716       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 08:29:40.063745       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 08:29:40.063769       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 08:29:40.064063       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 08:29:40.064526       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 08:29:40.066002       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 08:29:40.066046       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 08:29:40.066087       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 08:29:40.068236       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:29:40.069317       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 08:29:40.069366       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:29:40.071638       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 08:29:40.076884       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 08:29:40.082142       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 08:29:42.658752       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/metrics-server-85b7d694d7\" failed with pods \"metrics-server-85b7d694d7-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" logger="UnhandledError"
	E1108 08:30:10.072951       1 resource_quota_controller.go:446] "Unhandled Error" err="unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1" logger="UnhandledError"
	I1108 08:30:10.073096       1 resource_quota_monitor.go:227] "QuotaMonitor created object count evaluator" logger="resourcequota-controller" resource="volumesnapshots.snapshot.storage.k8s.io"
	I1108 08:30:10.073148       1 shared_informer.go:349] "Waiting for caches to sync" controller="resource quota"
	I1108 08:30:10.089440       1 garbagecollector.go:787] "failed to discover some groups" logger="garbage-collector-controller" groups="map[\"metrics.k8s.io/v1beta1\":\"stale GroupVersion discovery: metrics.k8s.io/v1beta1\"]"
	I1108 08:30:10.093242       1 shared_informer.go:349] "Waiting for caches to sync" controller="garbage collector"
	I1108 08:30:10.174139       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:30:10.193700       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 08:30:25.070269       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968] <==
	I1108 08:29:41.276526       1 server_linux.go:53] "Using iptables proxy"
	I1108 08:29:41.445177       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 08:29:41.546235       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 08:29:41.548844       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 08:29:41.549947       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 08:29:41.594700       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 08:29:41.594831       1 server_linux.go:132] "Using iptables Proxier"
	I1108 08:29:41.602471       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 08:29:41.609462       1 server.go:527] "Version info" version="v1.34.1"
	I1108 08:29:41.609901       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 08:29:41.611908       1 config.go:200] "Starting service config controller"
	I1108 08:29:41.611971       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 08:29:41.612016       1 config.go:106] "Starting endpoint slice config controller"
	I1108 08:29:41.612043       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 08:29:41.612075       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 08:29:41.612101       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 08:29:41.612771       1 config.go:309] "Starting node config controller"
	I1108 08:29:41.612825       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 08:29:41.714457       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 08:29:41.714514       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 08:29:41.716328       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 08:29:41.718343       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792] <==
	E1108 08:29:33.076751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 08:29:33.076806       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:29:33.076886       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:29:33.076911       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 08:29:33.076909       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 08:29:33.076930       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 08:29:33.076963       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 08:29:33.077010       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 08:29:33.077054       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 08:29:33.077069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:29:33.077088       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 08:29:33.077166       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 08:29:33.899459       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 08:29:33.908624       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 08:29:33.914488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 08:29:33.952824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:29:33.973186       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:29:33.994389       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 08:29:34.111036       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 08:29:34.188517       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:29:34.234589       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 08:29:34.268672       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 08:29:34.286175       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 08:29:34.316274       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1108 08:29:34.573262       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 08:30:43 addons-758852 kubelet[1286]: I1108 08:30:43.862436    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/amd-gpu-device-plugin-fgsj6" podStartSLOduration=1.047678571 podStartE2EDuration="21.86241772s" podCreationTimestamp="2025-11-08 08:30:22 +0000 UTC" firstStartedPulling="2025-11-08 08:30:22.683954622 +0000 UTC m=+47.154192044" lastFinishedPulling="2025-11-08 08:30:43.498693772 +0000 UTC m=+67.968931193" observedRunningTime="2025-11-08 08:30:43.861568986 +0000 UTC m=+68.331806427" watchObservedRunningTime="2025-11-08 08:30:43.86241772 +0000 UTC m=+68.332655163"
	Nov 08 08:30:44 addons-758852 kubelet[1286]: I1108 08:30:44.843994    1286 scope.go:117] "RemoveContainer" containerID="388d4e98565ee06caa06213b11f751e26d01c3e43ea7c52376649f8e5cbf27a7"
	Nov 08 08:30:44 addons-758852 kubelet[1286]: I1108 08:30:44.844276    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/amd-gpu-device-plugin-fgsj6" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:30:44 addons-758852 kubelet[1286]: I1108 08:30:44.962460    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frj4h\" (UniqueName: \"kubernetes.io/projected/d668e3fc-d4b4-4f4b-9eb5-51bec1e42405-kube-api-access-frj4h\") pod \"d668e3fc-d4b4-4f4b-9eb5-51bec1e42405\" (UID: \"d668e3fc-d4b4-4f4b-9eb5-51bec1e42405\") "
	Nov 08 08:30:44 addons-758852 kubelet[1286]: I1108 08:30:44.964595    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d668e3fc-d4b4-4f4b-9eb5-51bec1e42405-kube-api-access-frj4h" (OuterVolumeSpecName: "kube-api-access-frj4h") pod "d668e3fc-d4b4-4f4b-9eb5-51bec1e42405" (UID: "d668e3fc-d4b4-4f4b-9eb5-51bec1e42405"). InnerVolumeSpecName "kube-api-access-frj4h". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 08:30:45 addons-758852 kubelet[1286]: I1108 08:30:45.063037    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frj4h\" (UniqueName: \"kubernetes.io/projected/d668e3fc-d4b4-4f4b-9eb5-51bec1e42405-kube-api-access-frj4h\") on node \"addons-758852\" DevicePath \"\""
	Nov 08 08:30:45 addons-758852 kubelet[1286]: I1108 08:30:45.848461    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="502add19cd417e98e07a520599c659a0654b9339cbc6ef132f67bf219a7b8b4d"
	Nov 08 08:30:46 addons-758852 kubelet[1286]: I1108 08:30:46.270459    1286 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v29fq\" (UniqueName: \"kubernetes.io/projected/b7b9720d-ed15-40a1-a4a1-e07232e342b7-kube-api-access-v29fq\") pod \"b7b9720d-ed15-40a1-a4a1-e07232e342b7\" (UID: \"b7b9720d-ed15-40a1-a4a1-e07232e342b7\") "
	Nov 08 08:30:46 addons-758852 kubelet[1286]: I1108 08:30:46.272979    1286 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7b9720d-ed15-40a1-a4a1-e07232e342b7-kube-api-access-v29fq" (OuterVolumeSpecName: "kube-api-access-v29fq") pod "b7b9720d-ed15-40a1-a4a1-e07232e342b7" (UID: "b7b9720d-ed15-40a1-a4a1-e07232e342b7"). InnerVolumeSpecName "kube-api-access-v29fq". PluginName "kubernetes.io/projected", VolumeGIDValue ""
	Nov 08 08:30:46 addons-758852 kubelet[1286]: I1108 08:30:46.371494    1286 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v29fq\" (UniqueName: \"kubernetes.io/projected/b7b9720d-ed15-40a1-a4a1-e07232e342b7-kube-api-access-v29fq\") on node \"addons-758852\" DevicePath \"\""
	Nov 08 08:30:46 addons-758852 kubelet[1286]: I1108 08:30:46.859179    1286 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f9194bc7f5dd055118e05f5eba9b292d3048cc8dceb8e74b4eea81a70d5d8667"
	Nov 08 08:30:46 addons-758852 kubelet[1286]: I1108 08:30:46.864451    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j697c" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:30:46 addons-758852 kubelet[1286]: I1108 08:30:46.887938    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/registry-proxy-j697c" podStartSLOduration=1.298597195 podStartE2EDuration="24.887915041s" podCreationTimestamp="2025-11-08 08:30:22 +0000 UTC" firstStartedPulling="2025-11-08 08:30:22.71168077 +0000 UTC m=+47.181918191" lastFinishedPulling="2025-11-08 08:30:46.300998605 +0000 UTC m=+70.771236037" observedRunningTime="2025-11-08 08:30:46.880805443 +0000 UTC m=+71.351042886" watchObservedRunningTime="2025-11-08 08:30:46.887915041 +0000 UTC m=+71.358152484"
	Nov 08 08:30:47 addons-758852 kubelet[1286]: I1108 08:30:47.867434    1286 kubelet_pods.go:1082] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/registry-proxy-j697c" secret="" err="secret \"gcp-auth\" not found"
	Nov 08 08:30:50 addons-758852 kubelet[1286]: I1108 08:30:50.891490    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="ingress-nginx/ingress-nginx-controller-675c5ddd98-qd9l6" podStartSLOduration=58.1122056 podStartE2EDuration="1m8.891472406s" podCreationTimestamp="2025-11-08 08:29:42 +0000 UTC" firstStartedPulling="2025-11-08 08:30:39.325958319 +0000 UTC m=+63.796195740" lastFinishedPulling="2025-11-08 08:30:50.105225108 +0000 UTC m=+74.575462546" observedRunningTime="2025-11-08 08:30:50.890437193 +0000 UTC m=+75.360674635" watchObservedRunningTime="2025-11-08 08:30:50.891472406 +0000 UTC m=+75.361709847"
	Nov 08 08:30:52 addons-758852 kubelet[1286]: I1108 08:30:52.906167    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gadget/gadget-jb2ln" podStartSLOduration=64.736120619 podStartE2EDuration="1m10.906146687s" podCreationTimestamp="2025-11-08 08:29:42 +0000 UTC" firstStartedPulling="2025-11-08 08:30:46.641250519 +0000 UTC m=+71.111487952" lastFinishedPulling="2025-11-08 08:30:52.811276592 +0000 UTC m=+77.281514020" observedRunningTime="2025-11-08 08:30:52.905973306 +0000 UTC m=+77.376210749" watchObservedRunningTime="2025-11-08 08:30:52.906146687 +0000 UTC m=+77.376384111"
	Nov 08 08:30:54 addons-758852 kubelet[1286]: E1108 08:30:54.132471    1286 secret.go:189] Couldn't get secret kube-system/registry-creds-gcr: secret "registry-creds-gcr" not found
	Nov 08 08:30:54 addons-758852 kubelet[1286]: E1108 08:30:54.132557    1286 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6574dc0f-978b-434f-99a1-1452a69af882-gcr-creds podName:6574dc0f-978b-434f-99a1-1452a69af882 nodeName:}" failed. No retries permitted until 2025-11-08 08:31:26.132541307 +0000 UTC m=+110.602778743 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "gcr-creds" (UniqueName: "kubernetes.io/secret/6574dc0f-978b-434f-99a1-1452a69af882-gcr-creds") pod "registry-creds-764b6fb674-rjbxd" (UID: "6574dc0f-978b-434f-99a1-1452a69af882") : secret "registry-creds-gcr" not found
	Nov 08 08:30:54 addons-758852 kubelet[1286]: I1108 08:30:54.670006    1286 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: hostpath.csi.k8s.io endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0
	Nov 08 08:30:54 addons-758852 kubelet[1286]: I1108 08:30:54.670053    1286 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: hostpath.csi.k8s.io at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock
	Nov 08 08:30:57 addons-758852 kubelet[1286]: I1108 08:30:57.934397    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="gcp-auth/gcp-auth-78565c9fb4-99tsv" podStartSLOduration=66.174525709 podStartE2EDuration="1m8.934379136s" podCreationTimestamp="2025-11-08 08:29:49 +0000 UTC" firstStartedPulling="2025-11-08 08:30:54.427752854 +0000 UTC m=+78.897990277" lastFinishedPulling="2025-11-08 08:30:57.187606276 +0000 UTC m=+81.657843704" observedRunningTime="2025-11-08 08:30:57.933713177 +0000 UTC m=+82.403950620" watchObservedRunningTime="2025-11-08 08:30:57.934379136 +0000 UTC m=+82.404616580"
	Nov 08 08:30:58 addons-758852 kubelet[1286]: I1108 08:30:58.940554    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/csi-hostpathplugin-rtgg7" podStartSLOduration=1.628083251 podStartE2EDuration="36.940534201s" podCreationTimestamp="2025-11-08 08:30:22 +0000 UTC" firstStartedPulling="2025-11-08 08:30:22.692598627 +0000 UTC m=+47.162836061" lastFinishedPulling="2025-11-08 08:30:58.00504959 +0000 UTC m=+82.475287011" observedRunningTime="2025-11-08 08:30:58.940238485 +0000 UTC m=+83.410475928" watchObservedRunningTime="2025-11-08 08:30:58.940534201 +0000 UTC m=+83.410771643"
	Nov 08 08:31:01 addons-758852 kubelet[1286]: I1108 08:31:01.487967    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/850742cc-4864-4985-838b-99ba86e8a88f-gcp-creds\") pod \"busybox\" (UID: \"850742cc-4864-4985-838b-99ba86e8a88f\") " pod="default/busybox"
	Nov 08 08:31:01 addons-758852 kubelet[1286]: I1108 08:31:01.488013    1286 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bkrz\" (UniqueName: \"kubernetes.io/projected/850742cc-4864-4985-838b-99ba86e8a88f-kube-api-access-6bkrz\") pod \"busybox\" (UID: \"850742cc-4864-4985-838b-99ba86e8a88f\") " pod="default/busybox"
	Nov 08 08:31:03 addons-758852 kubelet[1286]: I1108 08:31:03.967634    1286 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.6323968450000002 podStartE2EDuration="2.96761294s" podCreationTimestamp="2025-11-08 08:31:01 +0000 UTC" firstStartedPulling="2025-11-08 08:31:01.736525873 +0000 UTC m=+86.206763294" lastFinishedPulling="2025-11-08 08:31:03.071741968 +0000 UTC m=+87.541979389" observedRunningTime="2025-11-08 08:31:03.96660988 +0000 UTC m=+88.436847322" watchObservedRunningTime="2025-11-08 08:31:03.96761294 +0000 UTC m=+88.437850382"
	
	
	==> storage-provisioner [76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f] <==
	W1108 08:30:46.830802       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:48.834052       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:48.837981       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:50.840326       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:50.843829       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:52.847315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:52.852870       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:54.856337       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:54.860437       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:56.872969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:56.876769       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:58.879092       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:30:58.882571       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:00.885505       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:00.890142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:02.893376       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:02.916124       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:04.918823       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:04.922621       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:06.926260       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:06.931384       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:08.934411       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:08.937901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:10.940181       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:31:10.945380       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
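Note: the repeated storage-provisioner lines in the dump above ("v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice") are client-go deprecation warnings emitted because the provisioner still polls the core/v1 Endpoints API. A minimal client-go sketch of the suggested replacement call, listing EndpointSlices instead of Endpoints — the kubeconfig path and the kube-system namespace here are illustrative assumptions, not taken from this run:

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Build a clientset from the default kubeconfig (assumption: ~/.kube/config).
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// Deprecated call that triggers the warning: cs.CoreV1().Endpoints(ns).List(...)
		// Replacement: the discovery.k8s.io/v1 EndpointSlice API.
		slices, err := cs.DiscoveryV1().EndpointSlices("kube-system").List(context.TODO(), metav1.ListOptions{})
		if err != nil {
			panic(err)
		}
		for _, s := range slices.Items {
			fmt.Printf("%s -> %d endpoints\n", s.Name, len(s.Endpoints))
		}
	}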
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-758852 -n addons-758852
helpers_test.go:269: (dbg) Run:  kubectl --context addons-758852 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: gcp-auth-certs-create-fkkhc gcp-auth-certs-patch-58q98 ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt registry-creds-764b6fb674-rjbxd
helpers_test.go:282: ======> post-mortem[TestAddons/parallel/Headlamp]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context addons-758852 describe pod gcp-auth-certs-create-fkkhc gcp-auth-certs-patch-58q98 ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt registry-creds-764b6fb674-rjbxd
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context addons-758852 describe pod gcp-auth-certs-create-fkkhc gcp-auth-certs-patch-58q98 ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt registry-creds-764b6fb674-rjbxd: exit status 1 (70.261227ms)

** stderr ** 
	Error from server (NotFound): pods "gcp-auth-certs-create-fkkhc" not found
	Error from server (NotFound): pods "gcp-auth-certs-patch-58q98" not found
	Error from server (NotFound): pods "ingress-nginx-admission-create-t2bkq" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-49bbt" not found
	Error from server (NotFound): pods "registry-creds-764b6fb674-rjbxd" not found

** /stderr **
helpers_test.go:287: kubectl --context addons-758852 describe pod gcp-auth-certs-create-fkkhc gcp-auth-certs-patch-58q98 ingress-nginx-admission-create-t2bkq ingress-nginx-admission-patch-49bbt registry-creds-764b6fb674-rjbxd: exit status 1
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable headlamp --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable headlamp --alsologtostderr -v=1: exit status 11 (235.20156ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:12.231476   19688 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:12.231760   19688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:12.231770   19688 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:12.231774   19688 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:12.231943   19688 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:12.232180   19688 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:12.232515   19688 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:12.232530   19688 addons.go:607] checking whether the cluster is paused
	I1108 08:31:12.232606   19688 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:12.232617   19688 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:12.232969   19688 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:12.250902   19688 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:12.250955   19688 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:12.268716   19688 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:12.360735   19688 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:12.360796   19688 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:12.388866   19688 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:12.388897   19688 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:12.388901   19688 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:12.388905   19688 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:12.388908   19688 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:12.388911   19688 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:12.388914   19688 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:12.388916   19688 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:12.388918   19688 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:12.388930   19688 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:12.388933   19688 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:12.388935   19688 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:12.388938   19688 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:12.388941   19688 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:12.388943   19688 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:12.388953   19688 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:12.388960   19688 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:12.388964   19688 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:12.388966   19688 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:12.388968   19688 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:12.388971   19688 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:12.388973   19688 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:12.388975   19688 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:12.388977   19688 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:12.388980   19688 cri.go:89] found id: ""
	I1108 08:31:12.389024   19688 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:12.402926   19688 out.go:203] 
	W1108 08:31:12.404172   19688 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:12Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:12.404190   19688 out.go:285] * 
	* 
	W1108 08:31:12.407256   19688 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_efe3f0a65eabdab15324ffdebd5a66da17706a9c_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:12.408596   19688 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable headlamp addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable headlamp --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Headlamp (2.48s)
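Note: every addons-disable failure in this report shares the same proximate cause, visible in the stderr above: minikube's paused-state check first lists kube-system containers via crictl (which succeeds), then shells out to `sudo runc list -f json`, which exits 1 because /run/runc — runc's default state directory — does not exist on this crio node, so the disable aborts with MK_ADDON_DISABLE_PAUSED. A standalone sketch of that two-step probe, run from a shell on the node (a hypothetical reproduction, not minikube's own code):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Step 1: list kube-system container IDs, exactly as the log shows.
		out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
			"--label", "io.kubernetes.pod.namespace=kube-system").Output()
		if err != nil {
			fmt.Println("crictl ps failed:", err)
			return
		}
		fmt.Printf("kube-system containers:\n%s", out)

		// Step 2: the call that fails in this report. runc looks for its state
		// under /run/runc, which is absent under this crio configuration, so
		// the command exits 1 ("open /run/runc: no such file or directory").
		if out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput(); err != nil {
			fmt.Printf("runc list failed (%v):\n%s", err, out)
		}
	}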

TestAddons/parallel/CloudSpanner (5.26s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:352: "cloud-spanner-emulator-6f9fcf858b-j98cr" [f5437096-9351-42eb-bcc5-7ebb1b4f7bfc] Running
addons_test.go:840: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003043966s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable cloud-spanner --alsologtostderr -v=1: exit status 11 (248.932461ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	I1108 08:31:20.252433   21081 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:20.252714   21081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:20.252726   21081 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:20.252744   21081 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:20.252949   21081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:20.253194   21081 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:20.253549   21081 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:20.253565   21081 addons.go:607] checking whether the cluster is paused
	I1108 08:31:20.253647   21081 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:20.253658   21081 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:20.253999   21081 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:20.273998   21081 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:20.274058   21081 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:20.292148   21081 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:20.388314   21081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:20.388398   21081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:20.420700   21081 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:20.420741   21081 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:20.420748   21081 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:20.420754   21081 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:20.420759   21081 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:20.420765   21081 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:20.420769   21081 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:20.420773   21081 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:20.420775   21081 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:20.420786   21081 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:20.420794   21081 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:20.420798   21081 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:20.420805   21081 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:20.420809   21081 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:20.420817   21081 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:20.420833   21081 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:20.420841   21081 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:20.420845   21081 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:20.420848   21081 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:20.420850   21081 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:20.420852   21081 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:20.420855   21081 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:20.420858   21081 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:20.420860   21081 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:20.420863   21081 cri.go:89] found id: ""
	I1108 08:31:20.420915   21081 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:20.435083   21081 out.go:203] 
	W1108 08:31:20.436511   21081 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:20.436534   21081 out.go:285] * 
	W1108 08:31:20.440270   21081 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e93ff976b7e98e1dc466aded9385c0856b6d1b41_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:20.441955   21081 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable cloud-spanner addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable cloud-spanner --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/CloudSpanner (5.26s)
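Every MK_ADDON_DISABLE_PAUSED failure in this run (CloudSpanner, LocalPath, NvidiaDevicePlugin, Yakd, AmdGpuDevicePlugin) has the same signature: before disabling an addon, minikube checks whether the cluster is paused by running "sudo runc list -f json" on the node, and runc exits 1 because its default state directory /run/runc does not exist on this crio image. A minimal reproduction sketch against the profile above; the mkdir is only a diagnostic to confirm the missing-directory hypothesis, not a proposed fix:

    # Reproduce the failing paused-state check on the node:
    $ minikube -p addons-758852 ssh -- sudo runc list -f json
    # fails: open /run/runc: no such file or directory

    # With the state directory present, the same command exits 0 and simply
    # reports no containers, confirming the missing directory is the whole story:
    $ minikube -p addons-758852 ssh -- "sudo mkdir -p /run/runc && sudo runc list -f json"

Note that crio itself is healthy here: the crictl listing captured above returns the full kube-system container set.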

x
+
TestAddons/parallel/LocalPath (8.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:949: (dbg) Run:  kubectl --context addons-758852 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:955: (dbg) Run:  kubectl --context addons-758852 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:959: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:402: (dbg) Run:  kubectl --context addons-758852 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:352: "test-local-path" [2e93b029-dc26-4650-8bab-e7762e1e65c5] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "test-local-path" [2e93b029-dc26-4650-8bab-e7762e1e65c5] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "test-local-path" [2e93b029-dc26-4650-8bab-e7762e1e65c5] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:962: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.002635648s
addons_test.go:967: (dbg) Run:  kubectl --context addons-758852 get pvc test-pvc -o=json
addons_test.go:976: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 ssh "cat /opt/local-path-provisioner/pvc-79233732-933d-46d0-b689-a8767082a39b_default_test-pvc/file1"
addons_test.go:988: (dbg) Run:  kubectl --context addons-758852 delete pod test-local-path
addons_test.go:992: (dbg) Run:  kubectl --context addons-758852 delete pvc test-pvc
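The positive path above (apply the PVC, run the writer pod, read file1 back over ssh, clean up) shows the local-path provisioner working as intended; only the final addon-disable step fails, for the same runc reason as the other addon tests. For readers unfamiliar with the flow, a minimal sketch of the kind of PVC/pod pair the testdata applies — names, image, and size here are illustrative assumptions, not the contents of the actual testdata files:

    $ kubectl --context addons-758852 apply -f - <<'EOF'
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test-pvc
    spec:
      storageClassName: local-path   # served by storage-provisioner-rancher
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 64Mi
    ---
    apiVersion: v1
    kind: Pod
    metadata:
      name: test-local-path
      labels:
        run: test-local-path
    spec:
      restartPolicy: Never
      containers:
      - name: busybox
        image: docker.io/library/busybox:stable
        command: ["sh", "-c", "echo ok > /data/file1"]
        volumeMounts:
        - name: data
          mountPath: /data
      volumes:
      - name: data
        persistentVolumeClaim:
          claimName: test-pvc
    EOF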
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable storage-provisioner-rancher --alsologtostderr -v=1: exit status 11 (249.858731ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1108 08:31:20.324668   21102 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:20.324948   21102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:20.324958   21102 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:20.324962   21102 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:20.325154   21102 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:20.325440   21102 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:20.325766   21102 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:20.325780   21102 addons.go:607] checking whether the cluster is paused
	I1108 08:31:20.325857   21102 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:20.325868   21102 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:20.326205   21102 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:20.345105   21102 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:20.345160   21102 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:20.363086   21102 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:20.459795   21102 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:20.459892   21102 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:20.489461   21102 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:20.489481   21102 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:20.489486   21102 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:20.489490   21102 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:20.489494   21102 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:20.489499   21102 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:20.489504   21102 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:20.489507   21102 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:20.489511   21102 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:20.489539   21102 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:20.489544   21102 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:20.489549   21102 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:20.489556   21102 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:20.489561   21102 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:20.489569   21102 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:20.489576   21102 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:20.489584   21102 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:20.489590   21102 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:20.489593   21102 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:20.489597   21102 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:20.489600   21102 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:20.489604   21102 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:20.489607   21102 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:20.489611   21102 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:20.489614   21102 cri.go:89] found id: ""
	I1108 08:31:20.489659   21102 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:20.505017   21102 out.go:203] 
	W1108 08:31:20.506712   21102 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:20Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:20.506732   21102 out.go:285] * 
	W1108 08:31:20.509887   21102 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_e8b2053d4ef30ba659303f708d034237180eb1ed_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:20.511655   21102 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable storage-provisioner-rancher addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable storage-provisioner-rancher --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/LocalPath (8.10s)

x
+
TestAddons/parallel/NvidiaDevicePlugin (5.26s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:352: "nvidia-device-plugin-daemonset-tzbp6" [d24597ce-bcff-4de2-b1c6-a98409e3d114] Running
addons_test.go:1025: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.003344952s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable nvidia-device-plugin --alsologtostderr -v=1: exit status 11 (255.333587ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1108 08:31:14.991464   19851 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:14.991764   19851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:14.991774   19851 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:14.991779   19851 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:14.992015   19851 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:14.992334   19851 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:14.992745   19851 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:14.992761   19851 addons.go:607] checking whether the cluster is paused
	I1108 08:31:14.992887   19851 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:14.992938   19851 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:14.993501   19851 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:15.016366   19851 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:15.016421   19851 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:15.043318   19851 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:15.137536   19851 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:15.137636   19851 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:15.166135   19851 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:15.166159   19851 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:15.166165   19851 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:15.166170   19851 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:15.166175   19851 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:15.166181   19851 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:15.166185   19851 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:15.166190   19851 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:15.166194   19851 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:15.166201   19851 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:15.166206   19851 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:15.166211   19851 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:15.166220   19851 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:15.166225   19851 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:15.166229   19851 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:15.166242   19851 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:15.166247   19851 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:15.166251   19851 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:15.166254   19851 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:15.166256   19851 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:15.166259   19851 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:15.166261   19851 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:15.166264   19851 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:15.166266   19851 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:15.166268   19851 cri.go:89] found id: ""
	I1108 08:31:15.166332   19851 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:15.180137   19851 out.go:203] 
	W1108 08:31:15.181369   19851 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:15Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:15.181387   19851 out.go:285] * 
	W1108 08:31:15.184261   19851 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_47e1a72799625313bd916979b0f8aa84efd54736_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:15.185618   19851 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable nvidia-device-plugin addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable nvidia-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/NvidiaDevicePlugin (5.26s)

x
+
TestAddons/parallel/Yakd (6.24s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:352: "yakd-dashboard-5ff678cb9-v2brq" [fa904f28-2e45-4bea-8f89-8c0d7e27d797] Running
addons_test.go:1047: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.003689874s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable yakd --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable yakd --alsologtostderr -v=1: exit status 11 (235.822401ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1108 08:31:31.757495   22177 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:31.757640   22177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:31.757650   22177 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:31.757655   22177 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:31.758118   22177 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:31.758592   22177 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:31.759329   22177 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:31.759364   22177 addons.go:607] checking whether the cluster is paused
	I1108 08:31:31.759468   22177 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:31.759480   22177 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:31.759833   22177 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:31.777506   22177 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:31.777552   22177 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:31.794641   22177 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:31.886765   22177 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:31.886841   22177 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:31.915480   22177 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:31.915506   22177 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:31.915525   22177 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:31.915533   22177 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:31.915538   22177 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:31.915543   22177 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:31.915547   22177 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:31.915552   22177 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:31.915556   22177 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:31.915564   22177 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:31.915572   22177 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:31.915576   22177 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:31.915580   22177 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:31.915584   22177 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:31.915587   22177 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:31.915596   22177 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:31.915601   22177 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:31.915605   22177 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:31.915608   22177 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:31.915610   22177 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:31.915620   22177 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:31.915623   22177 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:31.915625   22177 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:31.915630   22177 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:31.915632   22177 cri.go:89] found id: ""
	I1108 08:31:31.915674   22177 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:31.930215   22177 out.go:203] 
	W1108 08:31:31.931699   22177 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:31Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:31.931722   22177 out.go:285] * 
	W1108 08:31:31.935371   22177 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_82e5d844def28f20a5cac88dc27578ab5d1e7e1a_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:31.936835   22177 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable yakd addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable yakd --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/Yakd (6.24s)

x
+
TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: waiting 6m0s for pods matching "name=amd-gpu-device-plugin" in namespace "kube-system" ...
helpers_test.go:352: "amd-gpu-device-plugin-fgsj6" [13feceae-52dd-4251-94a9-552b73a9c34f] Running
addons_test.go:1038: (dbg) TestAddons/parallel/AmdGpuDevicePlugin: name=amd-gpu-device-plugin healthy within 5.003773896s
addons_test.go:1053: (dbg) Run:  out/minikube-linux-amd64 -p addons-758852 addons disable amd-gpu-device-plugin --alsologtostderr -v=1
addons_test.go:1053: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-758852 addons disable amd-gpu-device-plugin --alsologtostderr -v=1: exit status 11 (276.782412ms)

-- stdout --
	
	
-- /stdout --
** stderr ** 
	I1108 08:31:28.554707   21876 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:31:28.554951   21876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:28.554960   21876 out.go:374] Setting ErrFile to fd 2...
	I1108 08:31:28.554964   21876 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:31:28.555174   21876 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:31:28.555493   21876 mustload.go:66] Loading cluster: addons-758852
	I1108 08:31:28.555802   21876 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:28.555816   21876 addons.go:607] checking whether the cluster is paused
	I1108 08:31:28.555894   21876 config.go:182] Loaded profile config "addons-758852": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:31:28.555904   21876 host.go:66] Checking if "addons-758852" exists ...
	I1108 08:31:28.556241   21876 cli_runner.go:164] Run: docker container inspect addons-758852 --format={{.State.Status}}
	I1108 08:31:28.576729   21876 ssh_runner.go:195] Run: systemctl --version
	I1108 08:31:28.576899   21876 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-758852
	I1108 08:31:28.596663   21876 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/addons-758852/id_rsa Username:docker}
	I1108 08:31:28.696019   21876 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 08:31:28.696100   21876 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 08:31:28.733457   21876 cri.go:89] found id: "f34be8782c294c913b2d5d008f2fdf18e4f0593f9b76abde14a47b22eb4aa41a"
	I1108 08:31:28.733489   21876 cri.go:89] found id: "66198912dbb4cb9ffc8076a13f21084904a11dfbc0ac681b3e9bbe8d4c93d1a5"
	I1108 08:31:28.733495   21876 cri.go:89] found id: "ef0ec581e5d7103ded1b4b1513da33eff74920d9cb2c354617788733b3c6bdc1"
	I1108 08:31:28.733499   21876 cri.go:89] found id: "83841cdc496619dc65ca2a241f77cd32c4c0eac3b3c3ef199c4162bdd80bc48c"
	I1108 08:31:28.733503   21876 cri.go:89] found id: "f340f0145eb9bd395146f2486f91542b52c5a93bcad453c4ec68c02452edf5a5"
	I1108 08:31:28.733509   21876 cri.go:89] found id: "10f4c3a3e25586568e985cfd61e97d7c831e417ce729945dddfc8b5f4065791b"
	I1108 08:31:28.733514   21876 cri.go:89] found id: "db7058dc33833137110d507bfa442912827a4eded530dbd05fdfe2a65e415940"
	I1108 08:31:28.733518   21876 cri.go:89] found id: "1aaad9983441a3d1ff6f4b47c297111f159d42473cb477cdfe56d227b074a028"
	I1108 08:31:28.733522   21876 cri.go:89] found id: "8aabc952ff6865ec44b44077f31baed544e94503003d5a3b9f9503d295a705e4"
	I1108 08:31:28.733531   21876 cri.go:89] found id: "07cf5a2c38f59f9a00b93578f6faabaad5eeb2302b2292f7ee6f006ba6a432f4"
	I1108 08:31:28.733539   21876 cri.go:89] found id: "db1083da29dcefd63c34023fc2e36acd1af65120c0ddf2e169c040c3db8390ad"
	I1108 08:31:28.733544   21876 cri.go:89] found id: "b2bfae5b5011c7fb79eb6dde36a23484c59b842edb617a322e53b3c5e97b7cf2"
	I1108 08:31:28.733552   21876 cri.go:89] found id: "88464ad8c8a6f20e668d90dda3483c94750389e41c8f17a8d7b27afa4bc84611"
	I1108 08:31:28.733556   21876 cri.go:89] found id: "6df5c42a3809d752c316ec6cc3e3718d47c7a07ce1d01db6cdfb1641d34a0d74"
	I1108 08:31:28.733565   21876 cri.go:89] found id: "f8285831ae530d18e59acdc3e4ba4f88cad2a902a44a2b9dad33c37a64134615"
	I1108 08:31:28.733573   21876 cri.go:89] found id: "af0574068f104640a2fae0563418670001528edc55fd0bbbd99f771f824b1a84"
	I1108 08:31:28.733581   21876 cri.go:89] found id: "a616ef69289722db7a16b5aca03c5a2ed37e9d14c67a7b55d86b732a1dc55f7c"
	I1108 08:31:28.733586   21876 cri.go:89] found id: "76b41f4794cf946daa0b047caf646acb173f9be5119ce9aa7c51caaf2723ba3f"
	I1108 08:31:28.733589   21876 cri.go:89] found id: "10b7c804477d9ad55e837192498c7b9a2c973ac00df4eeec8b36b20d171bc3d3"
	I1108 08:31:28.733592   21876 cri.go:89] found id: "f2b09aff0e5539327c9e2893545adaf1d67eb47f05bac568a5245ede1988b968"
	I1108 08:31:28.733596   21876 cri.go:89] found id: "e08d383ff67051d312578d76b94c4be533bb68b3b1ac100f4cfd7f0d3411af34"
	I1108 08:31:28.733601   21876 cri.go:89] found id: "8e136e1e55dbabab3d1c70777ad834664370abfaffd0855d8f28d784742f6ff6"
	I1108 08:31:28.733608   21876 cri.go:89] found id: "ee1613ab5f8f087ebdbcb5a0c555c027191a85e3dad5793426f9cc10e4fe5268"
	I1108 08:31:28.733612   21876 cri.go:89] found id: "61e01b287696cf381127e59ef26736c435f38c6f0ff29cb185e42e209dbd3792"
	I1108 08:31:28.733621   21876 cri.go:89] found id: ""
	I1108 08:31:28.733673   21876 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 08:31:28.751555   21876 out.go:203] 
	W1108 08:31:28.752862   21876 out.go:285] X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:31:28Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 08:31:28.752884   21876 out.go:285] * 
	W1108 08:31:28.757748   21876 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_d91df5e23a6c7812cf3b3b0d72c142ff742a541e_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 08:31:28.759340   21876 out.go:203] 

** /stderr **
addons_test.go:1055: failed to disable amd-gpu-device-plugin addon: args "out/minikube-linux-amd64 -p addons-758852 addons disable amd-gpu-device-plugin --alsologtostderr -v=1": exit status 11
--- FAIL: TestAddons/parallel/AmdGpuDevicePlugin (5.28s)

x
+
TestFunctional/parallel/ServiceCmdConnect (602.83s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1636: (dbg) Run:  kubectl --context functional-096647 create deployment hello-node-connect --image kicbase/echo-server
functional_test.go:1640: (dbg) Run:  kubectl --context functional-096647 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1645: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:352: "hello-node-connect-7d85dfc575-bg4b8" [783b68d0-a5cd-475b-b98a-fe4442115382] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmdConnect: WARNING: pod list for "default" "app=hello-node-connect" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1645: ***** TestFunctional/parallel/ServiceCmdConnect: pod "app=hello-node-connect" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-096647 -n functional-096647
functional_test.go:1645: TestFunctional/parallel/ServiceCmdConnect: showing logs for failed pods as of 2025-11-08 08:46:45.250475018 +0000 UTC m=+1079.736678056
functional_test.go:1645: (dbg) Run:  kubectl --context functional-096647 describe po hello-node-connect-7d85dfc575-bg4b8 -n default
functional_test.go:1645: (dbg) kubectl --context functional-096647 describe po hello-node-connect-7d85dfc575-bg4b8 -n default:
Name:             hello-node-connect-7d85dfc575-bg4b8
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-096647/192.168.49.2
Start Time:       Sat, 08 Nov 2025 08:36:44 +0000
Labels:           app=hello-node-connect
                  pod-template-hash=7d85dfc575
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:           10.244.0.5
Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7c9ms (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-7c9ms:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bg4b8 to functional-096647
  Normal   Pulling    6m56s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m56s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m56s (x5 over 10m)   kubelet            Error: ErrImagePull
  Warning  Failed     5m (x20 over 10m)     kubelet            Error: ImagePullBackOff
  Normal   BackOff    4m48s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
functional_test.go:1645: (dbg) Run:  kubectl --context functional-096647 logs hello-node-connect-7d85dfc575-bg4b8 -n default
functional_test.go:1645: (dbg) Non-zero exit: kubectl --context functional-096647 logs hello-node-connect-7d85dfc575-bg4b8 -n default: exit status 1 (66.102617ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bg4b8" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1645: kubectl --context functional-096647 logs hello-node-connect-7d85dfc575-bg4b8 -n default: exit status 1
functional_test.go:1646: failed waiting for hello-node pod: app=hello-node-connect within 10m0s: context deadline exceeded
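The deployment never becomes ready because of image resolution, not service plumbing: the crio node enforces short-name mode, so the unqualified reference kicbase/echo-server matches more than one unqualified-search registry and the pull is rejected with "returns ambiguous list". Two hedged workarounds — the docker.io prefix assumes the Docker Hub copy of the image, and the registries.conf path is the conventional location rather than something verified on this image:

    # Option 1: create the deployment with a fully qualified image reference
    $ kubectl --context functional-096647 create deployment hello-node-connect \
        --image docker.io/kicbase/echo-server:latest

    # Option 2 (diagnostic only): relax short-name enforcement on the node,
    # then restart crio so the setting takes effect
    $ minikube -p functional-096647 ssh
    docker@functional-096647:~$ sudo sed -i 's/short-name-mode = "enforcing"/short-name-mode = "permissive"/' /etc/containers/registries.conf
    docker@functional-096647:~$ sudo systemctl restart crio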
functional_test.go:1608: service test failed - dumping debug information
functional_test.go:1609: -----------------------service failure post-mortem--------------------------------
functional_test.go:1612: (dbg) Run:  kubectl --context functional-096647 describe po hello-node-connect
functional_test.go:1616: hello-node pod describe:
(identical to the hello-node-connect pod describe output shown above)

functional_test.go:1618: (dbg) Run:  kubectl --context functional-096647 logs -l app=hello-node-connect
functional_test.go:1618: (dbg) Non-zero exit: kubectl --context functional-096647 logs -l app=hello-node-connect: exit status 1 (59.786802ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-connect-7d85dfc575-bg4b8" is waiting to start: trying and failing to pull image
** /stderr **
functional_test.go:1620: "kubectl --context functional-096647 logs -l app=hello-node-connect" failed: exit status 1
functional_test.go:1622: hello-node logs:
functional_test.go:1624: (dbg) Run:  kubectl --context functional-096647 describe svc hello-node-connect
functional_test.go:1628: hello-node svc describe:
Name:                     hello-node-connect
Namespace:                default
Labels:                   app=hello-node-connect
Annotations:              <none>
Selector:                 app=hello-node-connect
Type:                     NodePort
IP Family Policy:         SingleStack
IP Families:              IPv4
IP:                       10.101.154.215
IPs:                      10.101.154.215
Port:                     <unset>  8080/TCP
TargetPort:               8080/TCP
NodePort:                 <unset>  31302/TCP
Endpoints:                
Session Affinity:         None
External Traffic Policy:  Cluster
Internal Traffic Policy:  Cluster
Events:                   <none>
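For completeness, this is how the connect test can be exercised by hand against the values dumped above (node IP 192.168.49.2, NodePort 31302); with "Endpoints:" empty the connection simply fails, which is consistent with the pull failure rather than any problem in the service definition:

    # A sketch of the manual check the test automates:
    $ curl -s --max-time 5 http://192.168.49.2:31302/
    # echo-server would reply with the request it received; here curl
    # fails because no ready pod backs the service yet.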
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect functional-096647
helpers_test.go:243: (dbg) docker inspect functional-096647:

-- stdout --
	[
	    {
	        "Id": "9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec",
	        "Created": "2025-11-08T08:35:06.286829677Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 33022,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T08:35:06.321579613Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec/hostname",
	        "HostsPath": "/var/lib/docker/containers/9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec/hosts",
	        "LogPath": "/var/lib/docker/containers/9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec/9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec-json.log",
	        "Name": "/functional-096647",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "functional-096647:/var",
	                "/lib/modules:/lib/modules:ro"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "functional-096647",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4294967296,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8589934592,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "9313b9a2224ea839e6fda0b857f7b6908dd05c1bdc07b392d4558af0724678ec",
	                "LowerDir": "/var/lib/docker/overlay2/f22ad49ede0f91b7ebafe968997cfe6c45a472d8a81548810e476debb682eed8-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/f22ad49ede0f91b7ebafe968997cfe6c45a472d8a81548810e476debb682eed8/merged",
	                "UpperDir": "/var/lib/docker/overlay2/f22ad49ede0f91b7ebafe968997cfe6c45a472d8a81548810e476debb682eed8/diff",
	                "WorkDir": "/var/lib/docker/overlay2/f22ad49ede0f91b7ebafe968997cfe6c45a472d8a81548810e476debb682eed8/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "volume",
	                "Name": "functional-096647",
	                "Source": "/var/lib/docker/volumes/functional-096647/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            },
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-096647",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-096647",
	                "name.minikube.sigs.k8s.io": "functional-096647",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "149e544672987a772c549b7dcf97f863868062f96e489848bd92a55419e996bc",
	            "SandboxKey": "/var/run/docker/netns/149e54467298",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32778"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32779"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32782"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32780"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32781"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-096647": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "72:6e:9d:a9:8e:09",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "5865a1fafc8928f9aebd2ca4aee9231277f0b181c0a5808cbc478e221c7559da",
	                    "EndpointID": "fbc79bcd458dd491bd20ea08f8506704bd6616cb6b44756e947891510993b6e1",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-096647",
	                        "9313b9a2224e"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
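The inspect dump above records how the kicbase node container is wired: privileged, 4 GiB of memory (Memory: 4294967296), and every service port published on 127.0.0.1. As a quick cross-check on a live host (a sketch; it assumes the functional-096647 container still exists), the same port table can be read back directly:

    # Print only the published-port bindings shown in the dump above
    docker inspect -f '{{json .NetworkSettings.Ports}}' functional-096647
    # Expected here: 8441/tcp (apiserver) -> 127.0.0.1:32781, 22/tcp (ssh) -> 127.0.0.1:32778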
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p functional-096647 -n functional-096647
helpers_test.go:252: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestFunctional/parallel/ServiceCmdConnect]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 logs -n 25: (1.231355319s)
helpers_test.go:260: TestFunctional/parallel/ServiceCmdConnect logs: 
-- stdout --
	
	==> Audit <==
	┌────────────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│    COMMAND     │                                                               ARGS                                                                │      PROFILE      │  USER   │ VERSION │     START TIME      │      END TIME       │
	├────────────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ mount          │ -p functional-096647 /tmp/TestFunctionalparallelMountCmdspecific-port1953486497/001:/mount-9p --alsologtostderr -v=1 --port 46464 │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ ssh            │ functional-096647 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ ssh            │ functional-096647 ssh findmnt -T /mount-9p | grep 9p                                                                              │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ ssh            │ functional-096647 ssh -- ls -la /mount-9p                                                                                         │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ ssh            │ functional-096647 ssh sudo umount -f /mount-9p                                                                                    │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ mount          │ -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount3 --alsologtostderr -v=1                 │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ ssh            │ functional-096647 ssh findmnt -T /mount1                                                                                          │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ mount          │ -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount2 --alsologtostderr -v=1                 │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ mount          │ -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount1 --alsologtostderr -v=1                 │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ ssh            │ functional-096647 ssh findmnt -T /mount1                                                                                          │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ ssh            │ functional-096647 ssh findmnt -T /mount2                                                                                          │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ ssh            │ functional-096647 ssh findmnt -T /mount3                                                                                          │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ mount          │ -p functional-096647 --kill=true                                                                                                  │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ dashboard      │ --url --port 36195 -p functional-096647 --alsologtostderr -v=1                                                                    │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ ssh            │ functional-096647 ssh sudo cat /etc/test/nested/copy/9369/hosts                                                                   │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ update-context │ functional-096647 update-context --alsologtostderr -v=2                                                                           │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ update-context │ functional-096647 update-context --alsologtostderr -v=2                                                                           │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ update-context │ functional-096647 update-context --alsologtostderr -v=2                                                                           │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ image          │ functional-096647 image ls --format short --alsologtostderr                                                                       │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ image          │ functional-096647 image ls --format json --alsologtostderr                                                                        │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ ssh            │ functional-096647 ssh pgrep buildkitd                                                                                             │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │                     │
	│ image          │ functional-096647 image build -t localhost/my-image:functional-096647 testdata/build --alsologtostderr                            │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ image          │ functional-096647 image ls                                                                                                        │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ image          │ functional-096647 image ls --format yaml --alsologtostderr                                                                        │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	│ image          │ functional-096647 image ls --format table --alsologtostderr                                                                       │ functional-096647 │ jenkins │ v1.37.0 │ 08 Nov 25 08:37 UTC │ 08 Nov 25 08:37 UTC │
	└────────────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
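	# Editor's sketch (not captured output): the mount rows above can be replayed by hand.
	# The target path and 9p port come from the audit table; /tmp/mnt stands in for the
	# generated temp directory.
	out/minikube-linux-amd64 mount -p functional-096647 /tmp/mnt:/mount-9p --port 46464 &
	out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p"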
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:36:53
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:36:53.785050   45196 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:36:53.785167   45196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:53.785175   45196 out.go:374] Setting ErrFile to fd 2...
	I1108 08:36:53.785182   45196 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:53.785402   45196 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:36:53.785840   45196 out.go:368] Setting JSON to false
	I1108 08:36:53.786781   45196 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1165,"bootTime":1762589849,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:36:53.786881   45196 start.go:143] virtualization: kvm guest
	I1108 08:36:53.788841   45196 out.go:179] * [functional-096647] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:36:53.790135   45196 notify.go:221] Checking for updates...
	I1108 08:36:53.790174   45196 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:36:53.791563   45196 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:36:53.792876   45196 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:36:53.794191   45196 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:36:53.795483   45196 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:36:53.796785   45196 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:36:53.798558   45196 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:36:53.799046   45196 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:36:53.823134   45196 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:36:53.823228   45196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:36:53.879359   45196 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-08 08:36:53.869394755 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:36:53.879473   45196 docker.go:319] overlay module found
	I1108 08:36:53.881321   45196 out.go:179] * Using the docker driver based on existing profile
	I1108 08:36:53.882468   45196 start.go:309] selected driver: docker
	I1108 08:36:53.882483   45196 start.go:930] validating driver "docker" against &{Name:functional-096647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-096647 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:36:53.882563   45196 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:36:53.882643   45196 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:36:53.940217   45196 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-08 08:36:53.930704255 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:36:53.940887   45196 cni.go:84] Creating CNI manager for ""
	I1108 08:36:53.940942   45196 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 08:36:53.940992   45196 start.go:353] cluster config:
	{Name:functional-096647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-096647 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:36:53.942779   45196 out.go:179] * dry-run validation complete!
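	# Editor's sketch (not captured output): the driver re-validation above rests on the probe
	# minikube runs twice at 08:36:53.879/.940; the same health check works from the host shell:
	docker system info --format "{{json .}}"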
	
	
	==> CRI-O <==
	Nov 08 08:37:14 functional-096647 crio[3546]: time="2025-11-08T08:37:14.143541437Z" level=info msg="Starting container: 52c8612fce740eb73e5662168a81d0307da83d9f2a5f8e6c032b19babde34a8d" id=0d602f94-8267-45f4-9060-02d0e058eb69 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 08:37:14 functional-096647 crio[3546]: time="2025-11-08T08:37:14.145834428Z" level=info msg="Started container" PID=7402 containerID=52c8612fce740eb73e5662168a81d0307da83d9f2a5f8e6c032b19babde34a8d description=default/mysql-5bb876957f-bl48c/mysql id=0d602f94-8267-45f4-9060-02d0e058eb69 name=/runtime.v1.RuntimeService/StartContainer sandboxID=395b3a6340305996e2e0cb91149161c7546729984d38993b428ee89772fc3d8f
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.813776094Z" level=info msg="Stopping pod sandbox: 22d6712ada0a6ecd64d032e75875b8b0184b7f7f7d4584c2cdf7c6f8338a3830" id=a1b74d0b-118e-4920-a4e2-d407f08e4947 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.813840205Z" level=info msg="Stopped pod sandbox (already stopped): 22d6712ada0a6ecd64d032e75875b8b0184b7f7f7d4584c2cdf7c6f8338a3830" id=a1b74d0b-118e-4920-a4e2-d407f08e4947 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.814276021Z" level=info msg="Removing pod sandbox: 22d6712ada0a6ecd64d032e75875b8b0184b7f7f7d4584c2cdf7c6f8338a3830" id=91594329-9d92-4eda-a706-b8434313e140 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.852380516Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.852450475Z" level=info msg="Removed pod sandbox: 22d6712ada0a6ecd64d032e75875b8b0184b7f7f7d4584c2cdf7c6f8338a3830" id=91594329-9d92-4eda-a706-b8434313e140 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.852917793Z" level=info msg="Stopping pod sandbox: b8ce25fa50b458f0ac9f6ae24f29ee5c41e50019ac595a10a23fe19a78071bbb" id=99e33d78-bdd1-4862-9609-5b3b6736b086 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.852968168Z" level=info msg="Stopped pod sandbox (already stopped): b8ce25fa50b458f0ac9f6ae24f29ee5c41e50019ac595a10a23fe19a78071bbb" id=99e33d78-bdd1-4862-9609-5b3b6736b086 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.853643177Z" level=info msg="Removing pod sandbox: b8ce25fa50b458f0ac9f6ae24f29ee5c41e50019ac595a10a23fe19a78071bbb" id=7c99b9fb-c35f-4571-b455-8fc542b33882 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.881162987Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.881246274Z" level=info msg="Removed pod sandbox: b8ce25fa50b458f0ac9f6ae24f29ee5c41e50019ac595a10a23fe19a78071bbb" id=7c99b9fb-c35f-4571-b455-8fc542b33882 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.881919376Z" level=info msg="Stopping pod sandbox: 0e9c794a7cb10ad9fb29657c229609f10187e362fc29be74b96a7a0e67f4d531" id=89cba527-1563-4a67-8fde-24c50e225ac6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.881987247Z" level=info msg="Stopped pod sandbox (already stopped): 0e9c794a7cb10ad9fb29657c229609f10187e362fc29be74b96a7a0e67f4d531" id=89cba527-1563-4a67-8fde-24c50e225ac6 name=/runtime.v1.RuntimeService/StopPodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.882378342Z" level=info msg="Removing pod sandbox: 0e9c794a7cb10ad9fb29657c229609f10187e362fc29be74b96a7a0e67f4d531" id=7de12d54-a31e-4a8c-8696-ea5c4ff1c9eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.900434017Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 08:37:18 functional-096647 crio[3546]: time="2025-11-08T08:37:18.900507328Z" level=info msg="Removed pod sandbox: 0e9c794a7cb10ad9fb29657c229609f10187e362fc29be74b96a7a0e67f4d531" id=7de12d54-a31e-4a8c-8696-ea5c4ff1c9eb name=/runtime.v1.RuntimeService/RemovePodSandbox
	Nov 08 08:37:27 functional-096647 crio[3546]: time="2025-11-08T08:37:27.824259536Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=267cb686-bf71-4f27-8d8a-08a7512a23be name=/runtime.v1.ImageService/PullImage
	Nov 08 08:37:30 functional-096647 crio[3546]: time="2025-11-08T08:37:30.824572569Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=50362404-1929-4214-bdb4-aec6b62c402f name=/runtime.v1.ImageService/PullImage
	Nov 08 08:38:18 functional-096647 crio[3546]: time="2025-11-08T08:38:18.824466079Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=e2c89279-e4a7-4e67-a373-7b6a0d70a985 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:38:23 functional-096647 crio[3546]: time="2025-11-08T08:38:23.824487664Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=90d924ef-7dba-461e-9785-e7ffa1f53c09 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:39:49 functional-096647 crio[3546]: time="2025-11-08T08:39:49.824372675Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=61c3cafc-7766-4abf-8385-b83eb2fae286 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:39:50 functional-096647 crio[3546]: time="2025-11-08T08:39:50.824018029Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=23dd7c2a-b1ed-46f2-b631-9b816243b502 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:42:32 functional-096647 crio[3546]: time="2025-11-08T08:42:32.824897658Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4cf8f79b-8472-4231-9a39-3ae43d2f94b5 name=/runtime.v1.ImageService/PullImage
	Nov 08 08:42:38 functional-096647 crio[3546]: time="2025-11-08T08:42:38.82496597Z" level=info msg="Pulling image: kicbase/echo-server:latest" id=4883edbc-33d5-4c45-b738-6d0daf1187e4 name=/runtime.v1.ImageService/PullImage
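	# Editor's note (not captured output): the repeated "Pulling image: kicbase/echo-server:latest"
	# entries above never pair with a completion line, so the pull appears to stall. A sketch for
	# confirming from the node (assuming crictl is available there, as it is in the kicbase image):
	out/minikube-linux-amd64 -p functional-096647 ssh "sudo crictl images | grep echo-server"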
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	52c8612fce740       docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da                  9 minutes ago       Running             mysql                       0                   395b3a6340305       mysql-5bb876957f-bl48c                       default
	dd634ab08cedf       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029         9 minutes ago       Running             kubernetes-dashboard        0                   07ed7cf7c92a6       kubernetes-dashboard-855c9754f9-s68ff        kubernetes-dashboard
	5f73caf0a57a2       docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a   9 minutes ago       Running             dashboard-metrics-scraper   0                   d560f4ef6be03       dashboard-metrics-scraper-77bf4d6c4c-v5rxl   kubernetes-dashboard
	b078ecb9b8243       docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b                  9 minutes ago       Running             myfrontend                  0                   f3258c18adb70       sp-pod                                       default
	c3ea446e303f4       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998              9 minutes ago       Exited              mount-munger                0                   267bd61354f55       busybox-mount                                default
	d3438eab0d9d9       docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7                  10 minutes ago      Running             nginx                       0                   0f8de40faa5fb       nginx-svc                                    default
	bbdec9097d679       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                                 10 minutes ago      Running             kube-apiserver              0                   79356422275a2       kube-apiserver-functional-096647             kube-system
	a3eb7f1012987       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Running             kube-controller-manager     2                   438829c3c474b       kube-controller-manager-functional-096647    kube-system
	fff7042b7c4c7       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 10 minutes ago      Running             kube-scheduler              1                   eec96997ce613       kube-scheduler-functional-096647             kube-system
	ec17327aa4050       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 10 minutes ago      Running             etcd                        1                   255ef063ff496       etcd-functional-096647                       kube-system
	5372e8aed7fa4       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                                 10 minutes ago      Exited              kube-controller-manager     1                   438829c3c474b       kube-controller-manager-functional-096647    kube-system
	741fd4f879f8c       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 10 minutes ago      Running             coredns                     1                   e4585bf1d8c0d       coredns-66bc5c9577-gv77q                     kube-system
	a02d69c9dc687       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 10 minutes ago      Running             storage-provisioner         1                   ee1b357c94457       storage-provisioner                          kube-system
	ae847045ca632       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 10 minutes ago      Running             kindnet-cni                 1                   be29e411bf48a       kindnet-k46tr                                kube-system
	8e848dcb27ad4       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 10 minutes ago      Running             kube-proxy                  1                   c0e1d5cb11439       kube-proxy-5hh2c                             kube-system
	02c077d91b8c3       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                                 11 minutes ago      Exited              coredns                     0                   e4585bf1d8c0d       coredns-66bc5c9577-gv77q                     kube-system
	019c2068c6b33       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                 11 minutes ago      Exited              storage-provisioner         0                   ee1b357c94457       storage-provisioner                          kube-system
	a128f8e52bc3d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                                 11 minutes ago      Exited              kube-proxy                  0                   c0e1d5cb11439       kube-proxy-5hh2c                             kube-system
	958cf6e9f5cb3       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                                 11 minutes ago      Exited              kindnet-cni                 0                   be29e411bf48a       kindnet-k46tr                                kube-system
	883405aaa95f9       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                                 11 minutes ago      Exited              etcd                        0                   255ef063ff496       etcd-functional-096647                       kube-system
	28102bfcd9ed9       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                                 11 minutes ago      Exited              kube-scheduler              0                   eec96997ce613       kube-scheduler-functional-096647             kube-system
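	# Editor's note (not captured output): the table above is CRI-level container state; a sketch
	# for regenerating it on the node:
	out/minikube-linux-amd64 -p functional-096647 ssh "sudo crictl ps -a"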
	
	
	==> coredns [02c077d91b8c39dc466bb62a6ff6a7601ac55cb5d9d794719f6695d1cea4366d] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:59830 - 64274 "HINFO IN 5768577914831187417.4114170299917830688. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.425186203s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [741fd4f879f8c7e53c9e69666534ff92139c54e66aba8af594bac3ffbf42df46] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 9e2996f8cb67ac53e0259ab1f8d615d07d1beb0bd07e6a1e39769c3bf486a905bb991cc47f8d2f14d0d3a90a87dfc625a0b4c524fed169d8158c40657c0694b1
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50120 - 37182 "HINFO IN 1519359040663426370.2230355815858919605. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.419310831s
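	# Editor's note (not captured output): the two coredns excerpts are the same pod before and
	# after the cluster restart (attempts 0 and 1 in the container-status table above). A sketch
	# for pulling both from a live cluster:
	kubectl -n kube-system logs coredns-66bc5c9577-gv77q            # running attempt
	kubectl -n kube-system logs coredns-66bc5c9577-gv77q --previous # exited attempt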
	
	
	==> describe nodes <==
	Name:               functional-096647
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=functional-096647
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=functional-096647
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T08_35_20_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 08:35:17 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-096647
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 08:46:42 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 08:44:41 +0000   Sat, 08 Nov 2025 08:35:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 08:44:41 +0000   Sat, 08 Nov 2025 08:35:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 08:44:41 +0000   Sat, 08 Nov 2025 08:35:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 08:44:41 +0000   Sat, 08 Nov 2025 08:35:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-096647
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fa5718f6-e7da-44e4-b6b1-4e99d208f713
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-75c85bcc94-6c97z                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m58s
	  default                     hello-node-connect-7d85dfc575-bg4b8           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     mysql-5bb876957f-bl48c                        600m (7%)     700m (8%)   512Mi (1%)       700Mi (2%)     9m40s
	  default                     nginx-svc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  default                     sp-pod                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m46s
	  kube-system                 coredns-66bc5c9577-gv77q                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-functional-096647                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         11m
	  kube-system                 kindnet-k46tr                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-functional-096647              250m (3%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-functional-096647     200m (2%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-5hh2c                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-functional-096647              100m (1%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kubernetes-dashboard        dashboard-metrics-scraper-77bf4d6c4c-v5rxl    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-s68ff         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m41s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1450m (18%)  800m (10%)
	  memory             732Mi (2%)   920Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11m                kube-proxy       
	  Normal  Starting                 10m                kube-proxy       
	  Normal  NodeHasNoDiskPressure    11m (x8 over 11m)  kubelet          Node functional-096647 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  11m (x8 over 11m)  kubelet          Node functional-096647 status is now: NodeHasSufficientMemory
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     11m (x8 over 11m)  kubelet          Node functional-096647 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientPID     11m                kubelet          Node functional-096647 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  11m                kubelet          Node functional-096647 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    11m                kubelet          Node functional-096647 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 11m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           11m                node-controller  Node functional-096647 event: Registered Node functional-096647 in Controller
	  Normal  NodeReady                11m                kubelet          Node functional-096647 status is now: NodeReady
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node functional-096647 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node functional-096647 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m (x8 over 10m)  kubelet          Node functional-096647 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           10m                node-controller  Node functional-096647 event: Registered Node functional-096647 in Controller
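	# Editor's note (not captured output): the node report above is standard describe output; a
	# sketch for regenerating it:
	kubectl describe node functional-096647
	# Requests total 1450m CPU (18% of 8 cores) and 732Mi memory (2%), so resource pressure is an
	# unlikely cause of the failure under post-mortem here.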
	
	
	==> dmesg <==
	[  +0.084884] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.205659] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 8 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.054730] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +2.047820] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +4.031573] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +8.127109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[Nov 8 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
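	# Editor's note (not captured output): the repeating "martian source 10.244.0.20 from 127.0.0.1"
	# lines are kernel complaints about loopback-sourced packets reaching the pod network, plausibly
	# the 127.0.0.1-bound port publishes being hairpinned into a pod. A sketch for re-checking:
	out/minikube-linux-amd64 -p functional-096647 ssh "sudo dmesg | grep -i martian"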
	
	
	==> etcd [883405aaa95f9c133c900e49a810270fe8cbec53b7e1840c18c8f8b83992e9a1] <==
	{"level":"warn","ts":"2025-11-08T08:35:16.560910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:35:16.569121Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36156","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:35:16.577988Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36172","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:35:16.590615Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:35:16.596591Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:35:16.604248Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36222","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:35:16.646038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:36240","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T08:35:59.404812Z","caller":"osutil/interrupt_unix.go:65","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2025-11-08T08:35:59.404904Z","caller":"embed/etcd.go:426","msg":"closing etcd server","name":"functional-096647","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"error","ts":"2025-11-08T08:35:59.404995Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T08:36:06.406713Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"http: Server closed","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*serveCtx).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/serve.go:90"}
	{"level":"error","ts":"2025-11-08T08:36:06.406818Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2381: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T08:36:06.406844Z","caller":"etcdserver/server.go:1281","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2025-11-08T08:36:06.406936Z","caller":"etcdserver/server.go:2342","msg":"server has stopped; stopping storage version's monitor"}
	{"level":"warn","ts":"2025-11-08T08:36:06.406918Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T08:36:06.408296Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T08:36:06.408326Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T08:36:06.406957Z","caller":"etcdserver/server.go:2319","msg":"server has stopped; stopping cluster version's monitor"}
	{"level":"warn","ts":"2025-11-08T08:36:06.408351Z","caller":"embed/serve.go:245","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2025-11-08T08:36:06.408377Z","caller":"embed/serve.go:247","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"error","ts":"2025-11-08T08:36:06.408387Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 127.0.0.1:2379: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T08:36:06.410602Z","caller":"embed/etcd.go:621","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"error","ts":"2025-11-08T08:36:06.410661Z","caller":"embed/etcd.go:912","msg":"setting up serving from embedded etcd failed.","error":"accept tcp 192.168.49.2:2380: use of closed network connection","stacktrace":"go.etcd.io/etcd/server/v3/embed.(*Etcd).errHandler\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:912\ngo.etcd.io/etcd/server/v3/embed.(*Etcd).startHandler.func1\n\tgo.etcd.io/etcd/server/v3/embed/etcd.go:906"}
	{"level":"info","ts":"2025-11-08T08:36:06.410694Z","caller":"embed/etcd.go:626","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2025-11-08T08:36:06.410704Z","caller":"embed/etcd.go:428","msg":"closed etcd server","name":"functional-096647","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> etcd [ec17327aa4050c6d94a392a572a7829d506df088d3c1ddd0ced492e9f9fbbc14] <==
	{"level":"warn","ts":"2025-11-08T08:36:20.188168Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56322","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.194622Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56340","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.201532Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56358","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.208197Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56378","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.214399Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.220441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56412","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.226331Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56424","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.232441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56432","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.239327Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56454","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.245479Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.251632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56482","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.257956Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.263861Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.278992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56512","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.285377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56544","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:36:20.291290Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56556","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T08:37:15.163536Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"229.272566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T08:37:15.163642Z","caller":"traceutil/trace.go:172","msg":"trace[1138084752] range","detail":"{range_begin:/registry/pods; range_end:; response_count:0; response_revision:885; }","duration":"229.393957ms","start":"2025-11-08T08:37:14.934234Z","end":"2025-11-08T08:37:15.163628Z","steps":["trace[1138084752] 'range keys from in-memory index tree'  (duration: 229.185669ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T08:37:15.163652Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"154.053988ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2025-11-08T08:37:15.163695Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"131.98535ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/mysql-5bb876957f-bl48c\" limit:1 ","response":"range_response_count:1 size:3321"}
	{"level":"info","ts":"2025-11-08T08:37:15.163709Z","caller":"traceutil/trace.go:172","msg":"trace[1445076227] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:885; }","duration":"154.110769ms","start":"2025-11-08T08:37:15.009581Z","end":"2025-11-08T08:37:15.163692Z","steps":["trace[1445076227] 'range keys from in-memory index tree'  (duration: 153.972115ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:37:15.163733Z","caller":"traceutil/trace.go:172","msg":"trace[652585775] range","detail":"{range_begin:/registry/pods/default/mysql-5bb876957f-bl48c; range_end:; response_count:1; response_revision:885; }","duration":"132.025141ms","start":"2025-11-08T08:37:15.031696Z","end":"2025-11-08T08:37:15.163721Z","steps":["trace[652585775] 'range keys from in-memory index tree'  (duration: 131.823741ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T08:46:19.882097Z","caller":"mvcc/index.go:194","msg":"compact tree index","revision":1185}
	{"level":"info","ts":"2025-11-08T08:46:19.901992Z","caller":"mvcc/kvstore_compaction.go:70","msg":"finished scheduled compaction","compact-revision":1185,"took":"19.495915ms","hash":3073606413,"current-db-size-bytes":3514368,"current-db-size":"3.5 MB","current-db-size-in-use-bytes":1671168,"current-db-size-in-use":"1.7 MB"}
	{"level":"info","ts":"2025-11-08T08:46:19.902039Z","caller":"mvcc/hash.go:157","msg":"storing new hash","hash":3073606413,"revision":1185,"compact-revision":-1}
	
	
	==> kernel <==
	 08:46:46 up 29 min,  0 user,  load average: 0.25, 0.25, 0.27
	Linux functional-096647 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [958cf6e9f5cb393777e23771186524c529aeb8e54605c140593d996a747e3fde] <==
	I1108 08:35:25.720015       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 08:35:25.720310       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1108 08:35:25.720475       1 main.go:148] setting mtu 1500 for CNI 
	I1108 08:35:25.720493       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 08:35:25.720510       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T08:35:25Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 08:35:25.919754       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 08:35:25.920224       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 08:35:25.920242       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 08:35:25.920667       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 08:35:26.121181       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 08:35:26.121208       1 metrics.go:72] Registering metrics
	I1108 08:35:26.121323       1 controller.go:711] "Syncing nftables rules"
	I1108 08:35:35.921489       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:35:35.921583       1 main.go:301] handling current node
	I1108 08:35:45.925396       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:35:45.925425       1 main.go:301] handling current node
	I1108 08:35:55.923061       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:35:55.923099       1 main.go:301] handling current node
	
	
	==> kindnet [ae847045ca6328a6c26bd320690400a0dd99df51626802d808b7809c75303732] <==
	I1108 08:44:40.015409       1 main.go:301] handling current node
	I1108 08:44:50.015550       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:44:50.015584       1 main.go:301] handling current node
	I1108 08:45:00.023669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:45:00.023707       1 main.go:301] handling current node
	I1108 08:45:10.019200       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:45:10.019236       1 main.go:301] handling current node
	I1108 08:45:20.022696       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:45:20.022724       1 main.go:301] handling current node
	I1108 08:45:30.014760       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:45:30.014797       1 main.go:301] handling current node
	I1108 08:45:40.015364       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:45:40.015394       1 main.go:301] handling current node
	I1108 08:45:50.015998       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:45:50.016029       1 main.go:301] handling current node
	I1108 08:46:00.024157       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:46:00.024191       1 main.go:301] handling current node
	I1108 08:46:10.018326       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:46:10.018361       1 main.go:301] handling current node
	I1108 08:46:20.022572       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:46:20.022604       1 main.go:301] handling current node
	I1108 08:46:30.021656       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:46:30.021689       1 main.go:301] handling current node
	I1108 08:46:40.017884       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1108 08:46:40.017914       1 main.go:301] handling current node
	
	
	==> kube-apiserver [bbdec9097d6793de07d54db935ccc22e856168209445d0a7f957841efdc98496] <==
	I1108 08:36:20.838599       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 08:36:20.845742       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 08:36:21.689933       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W1108 08:36:21.896108       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.49.2]
	I1108 08:36:21.897220       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 08:36:21.901222       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 08:36:22.671901       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 08:36:22.874249       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 08:36:22.922004       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 08:36:22.927997       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 08:36:27.248469       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 08:36:38.302519       1 alloc.go:328] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.102.26.205"}
	I1108 08:36:43.906675       1 alloc.go:328] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.105.201.218"}
	I1108 08:36:44.926381       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.101.154.215"}
	I1108 08:36:48.084593       1 alloc.go:328] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.97.56.128"}
	E1108 08:36:59.293175       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:51792: use of closed network connection
	I1108 08:37:05.109713       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 08:37:05.227270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.102.69.116"}
	I1108 08:37:05.237270       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.111.117.251"}
	E1108 08:37:06.226579       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:37640: use of closed network connection
	I1108 08:37:06.413612       1 alloc.go:328] "allocated clusterIPs" service="default/mysql" clusterIPs={"IPv4":"10.104.93.214"}
	E1108 08:37:21.533169       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:49712: use of closed network connection
	E1108 08:37:22.624444       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47020: use of closed network connection
	E1108 08:37:24.146045       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8441->192.168.49.1:47044: use of closed network connection
	I1108 08:46:20.746814       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	
	
	==> kube-controller-manager [5372e8aed7fa43fdbdf865671be2957eed4f028e109265f0ac0593da56f63879] <==
	I1108 08:36:09.677101       1 controllermanager.go:781] "Started controller" controller="persistentvolume-attach-detach-controller"
	I1108 08:36:09.677247       1 attach_detach_controller.go:336] "Starting attach detach controller" logger="persistentvolume-attach-detach-controller"
	I1108 08:36:09.677262       1 shared_informer.go:349] "Waiting for caches to sync" controller="attach detach"
	I1108 08:36:09.726686       1 controllermanager.go:781] "Started controller" controller="persistentvolumeclaim-protection-controller"
	I1108 08:36:09.726708       1 controllermanager.go:733] "Controller is disabled by a feature gate" controller="kube-apiserver-serving-clustertrustbundle-publisher-controller" requiredFeatureGates=["ClusterTrustBundle"]
	I1108 08:36:09.726762       1 pvc_protection_controller.go:168] "Starting PVC protection controller" logger="persistentvolumeclaim-protection-controller"
	I1108 08:36:09.726769       1 shared_informer.go:349] "Waiting for caches to sync" controller="PVC protection"
	I1108 08:36:09.777935       1 controllermanager.go:781] "Started controller" controller="endpointslice-mirroring-controller"
	I1108 08:36:09.778071       1 endpointslicemirroring_controller.go:227] "Starting EndpointSliceMirroring controller" logger="endpointslice-mirroring-controller"
	I1108 08:36:09.778090       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice_mirroring"
	I1108 08:36:09.827012       1 controllermanager.go:781] "Started controller" controller="job-controller"
	I1108 08:36:09.827139       1 job_controller.go:257] "Starting job controller" logger="job-controller"
	I1108 08:36:09.827157       1 shared_informer.go:349] "Waiting for caches to sync" controller="job"
	I1108 08:36:09.876554       1 controllermanager.go:781] "Started controller" controller="cronjob-controller"
	I1108 08:36:09.876682       1 cronjob_controllerv2.go:145] "Starting cronjob controller v2" logger="cronjob-controller"
	I1108 08:36:09.876706       1 shared_informer.go:349] "Waiting for caches to sync" controller="cronjob"
	I1108 08:36:09.926934       1 controllermanager.go:781] "Started controller" controller="volumeattributesclass-protection-controller"
	I1108 08:36:09.926957       1 controllermanager.go:759] "Warning: skipping controller" controller="storage-version-migrator-controller"
	I1108 08:36:09.926964       1 controllermanager.go:744] "Warning: controller is disabled" controller="selinux-warning-controller"
	I1108 08:36:09.927002       1 vac_protection_controller.go:206] "Starting VAC protection controller" logger="volumeattributesclass-protection-controller"
	I1108 08:36:09.927009       1 shared_informer.go:349] "Waiting for caches to sync" controller="VAC protection"
	I1108 08:36:09.976971       1 controllermanager.go:781] "Started controller" controller="endpointslice-controller"
	I1108 08:36:09.977107       1 endpointslice_controller.go:281] "Starting endpoint slice controller" logger="endpointslice-controller"
	I1108 08:36:09.977123       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint_slice"
	F1108 08:36:10.024263       1 client_builder_dynamic.go:154] Get "https://192.168.49.2:8441/api/v1/namespaces/kube-system/serviceaccounts/service-account-controller": dial tcp 192.168.49.2:8441: connect: connection refused
	
	
	==> kube-controller-manager [a3eb7f101298779180d7cddeb052dcc306c259db4a553daab3d0ba915e12b2b4] <==
	I1108 08:36:24.067826       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 08:36:24.072061       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 08:36:24.073258       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 08:36:24.076659       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 08:36:24.078868       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 08:36:24.078886       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 08:36:24.078932       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 08:36:24.080144       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 08:36:24.104636       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 08:36:24.104649       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 08:36:24.104674       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 08:36:24.104897       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 08:36:24.105238       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 08:36:24.105311       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 08:36:24.106749       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 08:36:24.106802       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 08:36:24.109951       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:36:24.122119       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 08:36:24.127464       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	E1108 08:37:05.152959       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 08:37:05.156437       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 08:37:05.160040       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 08:37:05.161229       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-77bf4d6c4c\" failed with pods \"dashboard-metrics-scraper-77bf4d6c4c-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 08:37:05.163648       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	E1108 08:37:05.171512       1 replica_set.go:587] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-855c9754f9\" failed with pods \"kubernetes-dashboard-855c9754f9-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	
	
	==> kube-proxy [8e848dcb27ad4f554ef2e485fa8dc71ce0e6dfc410957d1a76495a009d65f95f] <==
	I1108 08:35:59.722921       1 server_linux.go:53] "Using iptables proxy"
	I1108 08:35:59.784962       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 08:35:59.885724       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 08:35:59.885768       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 08:35:59.885860       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 08:35:59.905256       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 08:35:59.905327       1 server_linux.go:132] "Using iptables Proxier"
	I1108 08:35:59.911074       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 08:35:59.911546       1 server.go:527] "Version info" version="v1.34.1"
	I1108 08:35:59.911575       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 08:35:59.912839       1 config.go:106] "Starting endpoint slice config controller"
	I1108 08:35:59.912897       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 08:35:59.912912       1 config.go:200] "Starting service config controller"
	I1108 08:35:59.912903       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 08:35:59.912917       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 08:35:59.912925       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 08:35:59.912974       1 config.go:309] "Starting node config controller"
	I1108 08:35:59.912982       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 08:36:00.013222       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 08:36:00.013237       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 08:36:00.013299       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 08:36:00.013340       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-proxy [a128f8e52bc3d5f4f9f3b12ffb260ae1221033a9bc2b19257b5cb3b05e180d58] <==
	I1108 08:35:25.580847       1 server_linux.go:53] "Using iptables proxy"
	I1108 08:35:25.654986       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 08:35:25.755152       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 08:35:25.755197       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.49.2"]
	E1108 08:35:25.755317       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 08:35:25.772565       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 08:35:25.772631       1 server_linux.go:132] "Using iptables Proxier"
	I1108 08:35:25.777756       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 08:35:25.778078       1 server.go:527] "Version info" version="v1.34.1"
	I1108 08:35:25.778103       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 08:35:25.779288       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 08:35:25.779300       1 config.go:200] "Starting service config controller"
	I1108 08:35:25.779305       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 08:35:25.779313       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 08:35:25.779337       1 config.go:106] "Starting endpoint slice config controller"
	I1108 08:35:25.779372       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 08:35:25.779579       1 config.go:309] "Starting node config controller"
	I1108 08:35:25.779647       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 08:35:25.779661       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 08:35:25.879394       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 08:35:25.879450       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 08:35:25.879470       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
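	
	The "Kube-proxy configuration may be incomplete or incorrect" lines in both kube-proxy logs are advisory: with nodePortAddresses unset, NodePort services accept traffic on every local IP, including loopback. The suggested fix maps to a one-line change in the kube-proxy config file (a sketch only; the "primary" keyword requires kube-proxy v1.29+, which this v1.34.1 cluster satisfies):
	
	  apiVersion: kubeproxy.config.k8s.io/v1alpha1
	  kind: KubeProxyConfiguration
	  nodePortAddresses: ["primary"]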
	
	
	==> kube-scheduler [28102bfcd9ed9d69d43f179e1cab4d1afe086cc847be0420f401b88b50bf4432] <==
	E1108 08:35:17.061526       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 08:35:17.061856       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:35:17.062230       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 08:35:17.062412       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 08:35:17.062500       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 08:35:17.884630       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 08:35:17.941167       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 08:35:17.974574       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:35:17.990891       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 08:35:18.001069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 08:35:18.011530       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:35:18.043612       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 08:35:18.056851       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 08:35:18.058751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 08:35:18.087069       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 08:35:18.146145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 08:35:18.189483       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:35:18.333144       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1108 08:35:20.858799       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 08:36:06.627244       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 08:36:06.627276       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1108 08:36:06.627494       1 secure_serving.go:259] Stopped listening on 127.0.0.1:10259
	I1108 08:36:06.627550       1 server.go:263] "[graceful-termination] secure server has stopped listening"
	I1108 08:36:06.627561       1 server.go:265] "[graceful-termination] secure server is exiting"
	E1108 08:36:06.627579       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [fff7042b7c4c7c066f1e1124a3e8968584df4f996134bc8b41a0cc400af6bce3] <==
	I1108 08:36:09.455051       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 08:36:09.455071       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 08:36:09.455069       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 08:36:09.455092       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 08:36:09.455071       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 08:36:09.455536       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 08:36:09.455561       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 08:36:09.555507       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 08:36:09.555642       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 08:36:09.555728       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	E1108 08:36:20.705949       1 reflector.go:205] "Failed to watch" err="csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 08:36:20.706179       1 reflector.go:205] "Failed to watch" err="statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 08:36:20.711390       1 reflector.go:205] "Failed to watch" err="persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 08:36:20.711437       1 reflector.go:205] "Failed to watch" err="resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 08:36:20.711461       1 reflector.go:205] "Failed to watch" err="replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 08:36:20.711484       1 reflector.go:205] "Failed to watch" err="csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 08:36:20.711506       1 reflector.go:205] "Failed to watch" err="services is forbidden: User \"system:kube-scheduler\" cannot watch resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 08:36:20.711508       1 reflector.go:205] "Failed to watch" err="pods is forbidden: User \"system:kube-scheduler\" cannot watch resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 08:36:20.711531       1 reflector.go:205] "Failed to watch" err="replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot watch resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 08:36:20.711549       1 reflector.go:205] "Failed to watch" err="persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot watch resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 08:36:20.711560       1 reflector.go:205] "Failed to watch" err="volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot watch resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 08:36:20.725635       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 08:36:20.725677       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 08:36:20.725694       1 reflector.go:205] "Failed to watch" err="configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot watch resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 08:36:20.725816       1 reflector.go:205] "Failed to watch" err="namespaces is forbidden: User \"system:kube-scheduler\" cannot watch resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	
	
	==> kubelet <==
	Nov 08 08:44:07 functional-096647 kubelet[4268]: E1108 08:44:07.824590    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:44:16 functional-096647 kubelet[4268]: E1108 08:44:16.825509    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:44:20 functional-096647 kubelet[4268]: E1108 08:44:20.824268    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:44:31 functional-096647 kubelet[4268]: E1108 08:44:31.823708    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:44:33 functional-096647 kubelet[4268]: E1108 08:44:33.823572    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:44:42 functional-096647 kubelet[4268]: E1108 08:44:42.824121    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:44:45 functional-096647 kubelet[4268]: E1108 08:44:45.823843    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:44:53 functional-096647 kubelet[4268]: E1108 08:44:53.824072    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:44:56 functional-096647 kubelet[4268]: E1108 08:44:56.823821    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:45:05 functional-096647 kubelet[4268]: E1108 08:45:05.823937    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:45:10 functional-096647 kubelet[4268]: E1108 08:45:10.823897    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:45:20 functional-096647 kubelet[4268]: E1108 08:45:20.824274    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:45:23 functional-096647 kubelet[4268]: E1108 08:45:23.823678    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:45:33 functional-096647 kubelet[4268]: E1108 08:45:33.824609    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:45:38 functional-096647 kubelet[4268]: E1108 08:45:38.824682    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:45:46 functional-096647 kubelet[4268]: E1108 08:45:46.823768    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:45:53 functional-096647 kubelet[4268]: E1108 08:45:53.824600    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:45:59 functional-096647 kubelet[4268]: E1108 08:45:59.823802    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:46:05 functional-096647 kubelet[4268]: E1108 08:46:05.823569    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:46:11 functional-096647 kubelet[4268]: E1108 08:46:11.824563    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:46:20 functional-096647 kubelet[4268]: E1108 08:46:20.824089    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:46:23 functional-096647 kubelet[4268]: E1108 08:46:23.824752    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:46:32 functional-096647 kubelet[4268]: E1108 08:46:32.824606    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
	Nov 08 08:46:37 functional-096647 kubelet[4268]: E1108 08:46:37.824468    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-75c85bcc94-6c97z" podUID="0404755f-97fe-43d1-be43-37c2e9037d23"
	Nov 08 08:46:46 functional-096647 kubelet[4268]: E1108 08:46:46.823906    4268 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"echo-server\" with ImagePullBackOff: \"Back-off pulling image \\\"kicbase/echo-server\\\": ErrImagePull: short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list\"" pod="default/hello-node-connect-7d85dfc575-bg4b8" podUID="783b68d0-a5cd-475b-b98a-fe4442115382"
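	
	The repeated ImagePullBackOff above is the proximate cause of the hello-node failures: CRI-O on this node enforces short-name resolution (short-name-mode = "enforcing" in containers-registries.conf(5)), so the unqualified reference kicbase/echo-server:latest matches more than one unqualified-search registry and is rejected as ambiguous. Two usual remedies, sketched under the assumption of the standard /etc/containers/registries.conf location:
	
	  # fully qualify the image in the pod spec, e.g.
	  image: docker.io/kicbase/echo-server:latest
	
	  # or relax resolution on the node in registries.conf
	  short-name-mode = "permissive"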
	
	
	==> kubernetes-dashboard [dd634ab08cedf4e5dbe12ccbc9ee79c03b0b1d3fd1014601bb02a5d5eb5246ce] <==
	2025/11/08 08:37:08 Starting overwatch
	2025/11/08 08:37:08 Using namespace: kubernetes-dashboard
	2025/11/08 08:37:08 Using in-cluster config to connect to apiserver
	2025/11/08 08:37:08 Using secret token for csrf signing
	2025/11/08 08:37:08 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 08:37:08 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 08:37:08 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 08:37:08 Generating JWE encryption key
	2025/11/08 08:37:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 08:37:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 08:37:09 Initializing JWE encryption key from synchronized object
	2025/11/08 08:37:09 Creating in-cluster Sidecar client
	2025/11/08 08:37:09 Successful request to sidecar
	2025/11/08 08:37:09 Serving insecurely on HTTP port: 9090
	
	
	==> storage-provisioner [019c2068c6b33b0edef7370efdd17bbc0afc84a9d87dffac12559b881a958cfe] <==
	W1108 08:35:36.490466       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:36.493477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 08:35:36.589333       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-096647_919ba349-5eb6-4ad2-9ddf-21f746028826!
	W1108 08:35:38.496911       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:38.500887       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:40.503444       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:40.508327       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:42.511851       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:42.515675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:44.520608       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:44.526613       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:46.530135       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:46.534976       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:48.538072       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:48.541970       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:50.545246       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:50.549171       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:52.552615       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:52.558142       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:54.560875       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:54.564741       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:56.567983       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:56.571767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:58.574431       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:35:58.578372       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [a02d69c9dc687c256e09096ad1b287a52cc878717474aa7479023dd2c54f2d31] <==
	W1108 08:46:22.618729       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:24.621672       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:24.625477       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:26.628552       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:26.632561       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:28.635474       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:28.640428       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:30.643523       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:30.647901       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:32.651016       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:32.655812       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:34.658822       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:34.663264       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:36.666516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:36.671623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:38.674549       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:38.678468       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:40.681702       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:40.685754       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:42.689238       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:42.692902       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:44.696131       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:44.700919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:46.703457       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 08:46:46.707213       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-096647 -n functional-096647
helpers_test.go:269: (dbg) Run:  kubectl --context functional-096647 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: busybox-mount hello-node-75c85bcc94-6c97z hello-node-connect-7d85dfc575-bg4b8
helpers_test.go:282: ======> post-mortem[TestFunctional/parallel/ServiceCmdConnect]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context functional-096647 describe pod busybox-mount hello-node-75c85bcc94-6c97z hello-node-connect-7d85dfc575-bg4b8
helpers_test.go:290: (dbg) kubectl --context functional-096647 describe pod busybox-mount hello-node-75c85bcc94-6c97z hello-node-connect-7d85dfc575-bg4b8:

-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-096647/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 08:36:55 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://c3ea446e303f4938c71de8540da8c02b498e3cd053d9a7e287aac1fb52352e0e
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Sat, 08 Nov 2025 08:36:57 +0000
	      Finished:     Sat, 08 Nov 2025 08:36:57 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-8spzm (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-8spzm:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  9m51s  default-scheduler  Successfully assigned default/busybox-mount to functional-096647
	  Normal  Pulling    9m52s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     9m50s  kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 1.46s (1.46s including waiting). Image size: 4631262 bytes.
	  Normal  Created    9m50s  kubelet            Created container: mount-munger
	  Normal  Started    9m50s  kubelet            Started container mount-munger
	
	
	Name:             hello-node-75c85bcc94-6c97z
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-096647/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 08:36:48 +0000
	Labels:           app=hello-node
	                  pod-template-hash=75c85bcc94
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.6
	IPs:
	  IP:           10.244.0.6
	Controlled By:  ReplicaSet/hello-node-75c85bcc94
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2f7rw (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-2f7rw:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                     From               Message
	  ----     ------     ----                    ----               -------
	  Normal   Scheduled  9m59s                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6c97z to functional-096647
	  Normal   Pulling    6m57s (x5 over 9m59s)   kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m57s (x5 over 9m59s)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m57s (x5 over 9m59s)   kubelet            Error: ErrImagePull
	  Normal   BackOff    4m52s (x21 over 9m59s)  kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     4m52s (x21 over 9m59s)  kubelet            Error: ImagePullBackOff
	
	
	Name:             hello-node-connect-7d85dfc575-bg4b8
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-096647/192.168.49.2
	Start Time:       Sat, 08 Nov 2025 08:36:44 +0000
	Labels:           app=hello-node-connect
	                  pod-template-hash=7d85dfc575
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:           10.244.0.5
	Controlled By:  ReplicaSet/hello-node-connect-7d85dfc575
	Containers:
	  echo-server:
	    Container ID:   
	    Image:          kicbase/echo-server
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-7c9ms (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-7c9ms:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    Optional:                false
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  10m                  default-scheduler  Successfully assigned default/hello-node-connect-7d85dfc575-bg4b8 to functional-096647
	  Normal   Pulling    6m58s (x5 over 10m)  kubelet            Pulling image "kicbase/echo-server"
	  Warning  Failed     6m58s (x5 over 10m)  kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
	  Warning  Failed     6m58s (x5 over 10m)  kubelet            Error: ErrImagePull
	  Normal   BackOff    1s (x42 over 10m)    kubelet            Back-off pulling image "kicbase/echo-server"
	  Warning  Failed     1s (x42 over 10m)    kubelet            Error: ImagePullBackOff

-- /stdout --
helpers_test.go:293: <<< TestFunctional/parallel/ServiceCmdConnect FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/ServiceCmdConnect (602.83s)
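
Note: the root cause visible in the pod events above is CRI-O short-name resolution: with short-name mode set to enforcing, the unqualified reference "kicbase/echo-server" matches more than one candidate registry and every pull is rejected. A minimal reproduction sketch (the docker.io prefix below is an assumption about where the image lives, not something the test specifies):

	# fails: short name is ambiguous under enforcing mode
	minikube -p functional-096647 ssh -- sudo crictl pull kicbase/echo-server:latest
	# expected to succeed: a fully qualified reference bypasses short-name resolution
	minikube -p functional-096647 ssh -- sudo crictl pull docker.io/kicbase/echo-server:latest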

TestFunctional/parallel/ImageCommands/ImageListShort (2.29s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls --format short --alsologtostderr
functional_test.go:276: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 image ls --format short --alsologtostderr: (2.287009375s)
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096647 image ls --format short --alsologtostderr:

functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096647 image ls --format short --alsologtostderr:
I1108 08:37:11.287805   48712 out.go:360] Setting OutFile to fd 1 ...
I1108 08:37:11.288089   48712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:11.288099   48712 out.go:374] Setting ErrFile to fd 2...
I1108 08:37:11.288105   48712 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:11.288428   48712 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
I1108 08:37:11.289188   48712 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:11.289372   48712 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:11.289937   48712 cli_runner.go:164] Run: docker container inspect functional-096647 --format={{.State.Status}}
I1108 08:37:11.312810   48712 ssh_runner.go:195] Run: systemctl --version
I1108 08:37:11.312874   48712 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096647
I1108 08:37:11.336301   48712 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/functional-096647/id_rsa Username:docker}
I1108 08:37:11.435036   48712 ssh_runner.go:195] Run: sudo crictl images --output json
I1108 08:37:13.463291   48712 ssh_runner.go:235] Completed: sudo crictl images --output json: (2.028209373s)
W1108 08:37:13.463386   48712 cache_images.go:736] Failed to list images for profile functional-096647 crictl images: sudo crictl images --output json: Process exited with status 1
stdout:

stderr:
E1108 08:37:13.460627    7208 log.go:32] "ListImages with filter from image service failed" err="rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL" filter="image:{}"
time="2025-11-08T08:37:13Z" level=fatal msg="listing images: rpc error: code = DeadlineExceeded desc = stream terminated by RST_STREAM with error code: CANCEL"
functional_test.go:290: expected registry.k8s.io/pause to be listed with minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageListShort (2.29s)
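
Note: unlike the pull failures above, this test tripped on a CRI timeout: `sudo crictl images --output json` completed in 2.028s and was cancelled with DeadlineExceeded (RST_STREAM), apparently hitting crictl's default 2s client timeout, so minikube saw no image list at all. A manual probe with a longer client timeout, as a sketch (the 30s value is illustrative):

	minikube -p functional-096647 ssh -- sudo crictl --timeout 30s images --output json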

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:370: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image load --daemon kicbase/echo-server:functional-096647 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-096647" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.08s)
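
Note: `image load --daemon` copies the image from the host Docker daemon into the cluster's CRI-O storage, and the assertion then lists images inside the cluster. Given the crictl listing timeout above, checking both sides by hand helps isolate whether the load or the listing is at fault, e.g.:

	docker image ls kicbase/echo-server:functional-096647
	out/minikube-linux-amd64 -p functional-096647 image ls | grep echo-server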

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image load --daemon kicbase/echo-server:functional-096647 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-096647" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (1.03s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:250: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:255: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-096647
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image load --daemon kicbase/echo-server:functional-096647 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls
functional_test.go:461: expected "kicbase/echo-server:functional-096647" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.27s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:395: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image save kicbase/echo-server:functional-096647 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:401: expected "/home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar" to exist after `image save`, but doesn't exist
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.30s)
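
Note: `image save` returned without error in 0.3s yet wrote no tar, so the follow-up stat fails. A quick manual re-check, as a sketch (the /tmp path is illustrative):

	out/minikube-linux-amd64 -p functional-096647 image save kicbase/echo-server:functional-096647 /tmp/echo-server.tar
	ls -l /tmp/echo-server.tar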

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:426: loading image into minikube from file: <nil>

** stderr ** 
	I1108 08:36:46.570675   43788 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:36:46.570956   43788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:46.570966   43788 out.go:374] Setting ErrFile to fd 2...
	I1108 08:36:46.570970   43788 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:46.571159   43788 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:36:46.571708   43788 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:36:46.571793   43788 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:36:46.572150   43788 cli_runner.go:164] Run: docker container inspect functional-096647 --format={{.State.Status}}
	I1108 08:36:46.591558   43788 ssh_runner.go:195] Run: systemctl --version
	I1108 08:36:46.591617   43788 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096647
	I1108 08:36:46.609123   43788 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/functional-096647/id_rsa Username:docker}
	I1108 08:36:46.699600   43788 cache_images.go:291] Loading image from: /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar
	W1108 08:36:46.699655   43788 cache_images.go:255] Failed to load cached images for "functional-096647": loading images: stat /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar: no such file or directory
	I1108 08:36:46.699677   43788 cache_images.go:267] failed pushing to: functional-096647

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.19s)
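
Note: this failure is downstream of ImageSaveToFile above: the tar was never written, so the load aborts at the stat ("no such file or directory") before any push to CRI-O is attempted. Confirming it is a one-liner:

	ls -l /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar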

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:434: (dbg) Run:  docker rmi kicbase/echo-server:functional-096647
functional_test.go:439: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image save --daemon kicbase/echo-server:functional-096647 --alsologtostderr
functional_test.go:447: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-096647
functional_test.go:447: (dbg) Non-zero exit: docker image inspect localhost/kicbase/echo-server:functional-096647: exit status 1 (17.212104ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-096647

** /stderr **
functional_test.go:449: expected image to be loaded into Docker, but image was not found: exit status 1

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: No such image: localhost/kicbase/echo-server:functional-096647

** /stderr **
--- FAIL: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.33s)
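
Note: `image save --daemon` should hand the image back to the host Docker daemon, where the test expects it under the localhost/ prefix (hence the inspect of localhost/kicbase/echo-server:functional-096647). Since the preceding `docker rmi` removed the only local copy and the save produced nothing, there is nothing left to inspect. A hedged re-check:

	out/minikube-linux-amd64 -p functional-096647 image save --daemon kicbase/echo-server:functional-096647
	docker image ls localhost/kicbase/echo-server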

TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1451: (dbg) Run:  kubectl --context functional-096647 create deployment hello-node --image kicbase/echo-server
functional_test.go:1455: (dbg) Run:  kubectl --context functional-096647 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1460: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:352: "hello-node-75c85bcc94-6c97z" [0404755f-97fe-43d1-be43-37c2e9037d23] Pending / Ready:ContainersNotReady (containers with unready status: [echo-server]) / ContainersReady:ContainersNotReady (containers with unready status: [echo-server])
helpers_test.go:337: TestFunctional/parallel/ServiceCmd/DeployApp: WARNING: pod list for "default" "app=hello-node" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test.go:1460: ***** TestFunctional/parallel/ServiceCmd/DeployApp: pod "app=hello-node" failed to start within 10m0s: context deadline exceeded ****
functional_test.go:1460: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p functional-096647 -n functional-096647
functional_test.go:1460: TestFunctional/parallel/ServiceCmd/DeployApp: showing logs for failed pods as of 2025-11-08 08:46:48.394761275 +0000 UTC m=+1082.880964313
functional_test.go:1460: (dbg) Run:  kubectl --context functional-096647 describe po hello-node-75c85bcc94-6c97z -n default
functional_test.go:1460: (dbg) kubectl --context functional-096647 describe po hello-node-75c85bcc94-6c97z -n default:
Name:             hello-node-75c85bcc94-6c97z
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-096647/192.168.49.2
Start Time:       Sat, 08 Nov 2025 08:36:48 +0000
Labels:           app=hello-node
                  pod-template-hash=75c85bcc94
Annotations:      <none>
Status:           Pending
IP:               10.244.0.6
IPs:
  IP:           10.244.0.6
Controlled By:  ReplicaSet/hello-node-75c85bcc94
Containers:
  echo-server:
    Container ID:   
    Image:          kicbase/echo-server
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2f7rw (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  kube-api-access-2f7rw:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    Optional:                false
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                   From               Message
  ----     ------     ----                  ----               -------
  Normal   Scheduled  10m                   default-scheduler  Successfully assigned default/hello-node-75c85bcc94-6c97z to functional-096647
  Normal   Pulling    6m58s (x5 over 10m)   kubelet            Pulling image "kicbase/echo-server"
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Failed to pull image "kicbase/echo-server": short name mode is enforcing, but image name kicbase/echo-server:latest returns ambiguous list
  Warning  Failed     6m58s (x5 over 10m)   kubelet            Error: ErrImagePull
  Normal   BackOff    4m53s (x21 over 10m)  kubelet            Back-off pulling image "kicbase/echo-server"
  Warning  Failed     4m53s (x21 over 10m)  kubelet            Error: ImagePullBackOff
functional_test.go:1460: (dbg) Run:  kubectl --context functional-096647 logs hello-node-75c85bcc94-6c97z -n default
functional_test.go:1460: (dbg) Non-zero exit: kubectl --context functional-096647 logs hello-node-75c85bcc94-6c97z -n default: exit status 1 (59.603166ms)

** stderr ** 
	Error from server (BadRequest): container "echo-server" in pod "hello-node-75c85bcc94-6c97z" is waiting to start: trying and failing to pull image

** /stderr **
functional_test.go:1460: kubectl --context functional-096647 logs hello-node-75c85bcc94-6c97z -n default: exit status 1
functional_test.go:1461: failed waiting for hello-node pod: app=hello-node within 10m0s: context deadline exceeded
--- FAIL: TestFunctional/parallel/ServiceCmd/DeployApp (600.55s)
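
Note: same short-name enforcement as in ServiceCmdConnect: the deployment was created with the unqualified image `kicbase/echo-server`, which CRI-O refuses to resolve. A hypothetical workaround at the kubectl level is to point the deployment at a fully qualified reference (registry prefix assumed):

	kubectl --context functional-096647 set image deployment/hello-node echo-server=docker.io/kicbase/echo-server:latest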

TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1519: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 service --namespace=default --https --url hello-node
functional_test.go:1519: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 service --namespace=default --https --url hello-node: exit status 115 (523.966304ms)

-- stdout --
	https://192.168.49.2:30265
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_3af0dd3f106bd0c134df3d834cbdbb288a06d35d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1521: failed to get service url. args "out/minikube-linux-amd64 -p functional-096647 service --namespace=default --https --url hello-node" : exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (0.52s)
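
Note: SVC_UNREACHABLE here is a knock-on effect of the DeployApp failure: the NodePort URL itself is computed (it even appears on stdout), but minikube bails out because no running pod backs the service. The endpoint view makes that explicit:

	kubectl --context functional-096647 get endpoints hello-node
	kubectl --context functional-096647 get pods -l app=hello-node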

TestFunctional/parallel/ServiceCmd/Format (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1550: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 service hello-node --url --format={{.IP}}
functional_test.go:1550: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 service hello-node --url --format={{.IP}}: exit status 115 (529.485642ms)

-- stdout --
	192.168.49.2
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1552: failed to get service url with custom format. args "out/minikube-linux-amd64 -p functional-096647 service hello-node --url --format={{.IP}}": exit status 115
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (0.53s)

TestFunctional/parallel/ServiceCmd/URL (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1569: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 service hello-node --url
functional_test.go:1569: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 service hello-node --url: exit status 115 (526.501292ms)

-- stdout --
	http://192.168.49.2:30265
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service hello-node found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_7cc4328ee572bf2be3730700e5bda4ff5ee9066f_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:1571: failed to get service url. args: "out/minikube-linux-amd64 -p functional-096647 service hello-node --url": exit status 115
functional_test.go:1575: found endpoint for hello-node: http://192.168.49.2:30265
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (0.53s)

TestJSONOutput/pause/Command (2.14s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-882453 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p json-output-882453 --output=json --user=testUser: exit status 80 (2.139666785s)

-- stdout --
	{"specversion":"1.0","id":"619f6eaa-e845-4419-8095-e0c737ae46fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Pausing node json-output-882453 ...","name":"Pausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"1b6750d3-a4cb-4053-95da-6b9d51fd194f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list running: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T08:56:37Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_PAUSE","url":""}}
	{"specversion":"1.0","id":"2d294002-3335-4c0a-849e-1b1b0b8fc61e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 pause -p json-output-882453 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/pause/Command (2.14s)
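
Note: exit status 80 (GUEST_PAUSE) comes from minikube shelling out to `sudo runc list -f json`, which fails because the runc state directory /run/runc does not exist on the node; with CRI-O here the container state evidently lives elsewhere. A manual probe, as a sketch (the --root shown is runc's default, not something minikube sets):

	minikube -p json-output-882453 ssh -- ls /run/runc
	minikube -p json-output-882453 ssh -- sudo runc --root /run/runc list -f json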

TestJSONOutput/unpause/Command (1.7s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-882453 --output=json --user=testUser
json_output_test.go:63: (dbg) Non-zero exit: out/minikube-linux-amd64 unpause -p json-output-882453 --output=json --user=testUser: exit status 80 (1.696531095s)

-- stdout --
	{"specversion":"1.0","id":"0d382026-f444-4877-be37-d6b33292ed0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"Unpausing node json-output-882453 ...","name":"Unpausing","totalsteps":"1"}}
	{"specversion":"1.0","id":"754286dd-4ce7-4688-9d15-9573c015063d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"80","issues":"","message":"Pause: list paused: runc: sudo runc list -f json: Process exited with status 1\nstdout:\n\nstderr:\ntime=\"2025-11-08T08:56:39Z\" level=error msg=\"open /run/runc: no such file or directory\"","name":"GUEST_UNPAUSE","url":""}}
	{"specversion":"1.0","id":"92e91731-b5e0-4c07-a8a6-3e456926221c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"message":"╭───────────────────────────────────────────────────────────────────────────────────────────╮\n│                                                                                           │\n│    If the above advice does not help, please let us know:                                 │\n│    https://github.com/kubernetes/minikube/issues/new/choose                               │\n│                                                                                           │\n│    Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │\n│    Please also attach the following f
ile to the GitHub issue:                             │\n│    - /tmp/minikube_unpause_85c908ac827001a7ced33feb0caf7da086d17584_0.log                 │\n│                                                                                           │\n╰───────────────────────────────────────────────────────────────────────────────────────────╯"}}

-- /stdout --
json_output_test.go:65: failed to clean up: args "out/minikube-linux-amd64 unpause -p json-output-882453 --output=json --user=testUser": exit status 80
--- FAIL: TestJSONOutput/unpause/Command (1.70s)

TestPause/serial/Pause (5.71s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-322482 --alsologtostderr -v=5
pause_test.go:110: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p pause-322482 --alsologtostderr -v=5: exit status 80 (2.380912996s)

-- stdout --
	* Pausing node pause-322482 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:12:03.025231  229939 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:12:03.025507  229939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:03.025519  229939 out.go:374] Setting ErrFile to fd 2...
	I1108 09:12:03.025524  229939 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:12:03.025690  229939 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:12:03.025912  229939 out.go:368] Setting JSON to false
	I1108 09:12:03.025956  229939 mustload.go:66] Loading cluster: pause-322482
	I1108 09:12:03.026326  229939 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:03.026709  229939 cli_runner.go:164] Run: docker container inspect pause-322482 --format={{.State.Status}}
	I1108 09:12:03.044821  229939 host.go:66] Checking if "pause-322482" exists ...
	I1108 09:12:03.045166  229939 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:12:03.098231  229939 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:12:03.088696125 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:12:03.098852  229939 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:pause-322482 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:12:03.100786  229939 out.go:179] * Pausing node pause-322482 ... 
	I1108 09:12:03.102016  229939 host.go:66] Checking if "pause-322482" exists ...
	I1108 09:12:03.102423  229939 ssh_runner.go:195] Run: systemctl --version
	I1108 09:12:03.102460  229939 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:12:03.119909  229939 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:12:03.212681  229939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:12:03.224807  229939 pause.go:52] kubelet running: true
	I1108 09:12:03.224918  229939 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:12:03.364070  229939 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:12:03.364149  229939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:12:03.427156  229939 cri.go:89] found id: "895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af"
	I1108 09:12:03.427188  229939 cri.go:89] found id: "91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5"
	I1108 09:12:03.427192  229939 cri.go:89] found id: "a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5"
	I1108 09:12:03.427195  229939 cri.go:89] found id: "1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c"
	I1108 09:12:03.427198  229939 cri.go:89] found id: "38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87"
	I1108 09:12:03.427201  229939 cri.go:89] found id: "3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb"
	I1108 09:12:03.427203  229939 cri.go:89] found id: "353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730"
	I1108 09:12:03.427205  229939 cri.go:89] found id: ""
	I1108 09:12:03.427238  229939 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:03.438648  229939 retry.go:31] will retry after 168.746172ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:03Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:12:03.608091  229939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:12:03.621033  229939 pause.go:52] kubelet running: false
	I1108 09:12:03.621083  229939 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:12:03.735816  229939 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:12:03.735901  229939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:12:03.802725  229939 cri.go:89] found id: "895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af"
	I1108 09:12:03.802755  229939 cri.go:89] found id: "91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5"
	I1108 09:12:03.802761  229939 cri.go:89] found id: "a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5"
	I1108 09:12:03.802766  229939 cri.go:89] found id: "1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c"
	I1108 09:12:03.802770  229939 cri.go:89] found id: "38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87"
	I1108 09:12:03.802775  229939 cri.go:89] found id: "3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb"
	I1108 09:12:03.802780  229939 cri.go:89] found id: "353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730"
	I1108 09:12:03.802785  229939 cri.go:89] found id: ""
	I1108 09:12:03.802828  229939 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:03.814938  229939 retry.go:31] will retry after 505.559603ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:03Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:12:04.321714  229939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:12:04.334950  229939 pause.go:52] kubelet running: false
	I1108 09:12:04.335001  229939 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:12:04.450733  229939 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:12:04.450813  229939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:12:04.518383  229939 cri.go:89] found id: "895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af"
	I1108 09:12:04.518406  229939 cri.go:89] found id: "91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5"
	I1108 09:12:04.518409  229939 cri.go:89] found id: "a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5"
	I1108 09:12:04.518412  229939 cri.go:89] found id: "1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c"
	I1108 09:12:04.518415  229939 cri.go:89] found id: "38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87"
	I1108 09:12:04.518418  229939 cri.go:89] found id: "3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb"
	I1108 09:12:04.518421  229939 cri.go:89] found id: "353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730"
	I1108 09:12:04.518423  229939 cri.go:89] found id: ""
	I1108 09:12:04.518464  229939 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:04.530575  229939 retry.go:31] will retry after 600.134079ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:04Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:12:05.131401  229939 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:12:05.145667  229939 pause.go:52] kubelet running: false
	I1108 09:12:05.145731  229939 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:12:05.258152  229939 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:12:05.258220  229939 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:12:05.325372  229939 cri.go:89] found id: "895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af"
	I1108 09:12:05.325399  229939 cri.go:89] found id: "91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5"
	I1108 09:12:05.325406  229939 cri.go:89] found id: "a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5"
	I1108 09:12:05.325410  229939 cri.go:89] found id: "1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c"
	I1108 09:12:05.325416  229939 cri.go:89] found id: "38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87"
	I1108 09:12:05.325420  229939 cri.go:89] found id: "3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb"
	I1108 09:12:05.325424  229939 cri.go:89] found id: "353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730"
	I1108 09:12:05.325429  229939 cri.go:89] found id: ""
	I1108 09:12:05.325482  229939 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:12:05.340478  229939 out.go:203] 
	W1108 09:12:05.341958  229939 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:05Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:05Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:12:05.341981  229939 out.go:285] * 
	* 
	W1108 09:12:05.345924  229939 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:12:05.348140  229939 out.go:203] 

** /stderr **
pause_test.go:112: failed to pause minikube with args: "out/minikube-linux-amd64 pause -p pause-322482 --alsologtostderr -v=5" : exit status 80
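Analysis: the pause failure above is minikube shelling into the node and running "sudo runc list -f json", which exits 1 because runc's state directory is missing even though crictl still reports container IDs (see the found-id lines above). A minimal way to poke at this by hand, assuming the profile name from this run and runc's default root of /run/runc:

	# Re-run the listing that pause performs inside the node:
	out/minikube-linux-amd64 -p pause-322482 ssh -- sudo runc list -f json
	# cri-o still tracks the containers:
	out/minikube-linux-amd64 -p pause-322482 ssh -- sudo crictl ps -a --quiet
	# runc only sees state under its --root (default /run/runc for root),
	# so a missing directory means no runc-visible containers:
	out/minikube-linux-amd64 -p pause-322482 ssh -- sudo ls /run/runc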
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-322482
helpers_test.go:243: (dbg) docker inspect pause-322482:

-- stdout --
	[
	    {
	        "Id": "643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3",
	        "Created": "2025-11-08T09:11:21.921111224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:11:21.961233448Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/hosts",
	        "LogPath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3-json.log",
	        "Name": "/pause-322482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-322482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-322482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3",
	                "LowerDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-322482",
	                "Source": "/var/lib/docker/volumes/pause-322482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-322482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-322482",
	                "name.minikube.sigs.k8s.io": "pause-322482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7668ea48e8babf7c5afcc2b316d7cd81e38ab3b4fe1a33ea1d3d3f2a39f666fb",
	            "SandboxKey": "/var/run/docker/netns/7668ea48e8ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-322482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:39:dd:35:ba:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f49ae350ddc4c163fff2777330ce0190d365ddf7c80549d6d0ce21ec674b83b",
	                    "EndpointID": "a9344bed91f7952472c8d9ac24ee8a71b77a0dc9ca5e40c793ef647e146739b8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-322482",
	                        "643d6aaa38a5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
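Analysis: the inspect output above shows the container itself is healthy: State.Status is "running", State.Paused is false, and SSH is published on 127.0.0.1:33049, so the failure is inside the guest rather than at the Docker layer. The same fields can be pulled directly with format templates (the second query mirrors the cli_runner call in the logs below):

	docker inspect -f '{{.State.Status}} paused={{.State.Paused}}' pause-322482
	docker inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' pause-322482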
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-322482 -n pause-322482
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-322482 -n pause-322482: exit status 2 (315.746491ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
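Analysis: minikube status reports component state through its exit code, so exit status 2 alongside Host=Running is expected here: kubelet was disabled during the failed pause (see the "sudo systemctl disable --now kubelet" step above), and the helper itself notes the error may be ok. A per-component view makes this explicit; the field names below are the standard status template fields:

	out/minikube-linux-amd64 -p pause-322482 status --format '{{.Host}} kubelet={{.Kubelet}} apiserver={{.APIServer}}'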
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-322482 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-845504 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-845504       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-004778                                                                                                                                                                                               │ force-systemd-env-004778  │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p cert-expiration-640168 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-640168    │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ -p NoKubernetes-845504 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-845504       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ delete  │ -p NoKubernetes-845504                                                                                                                                                                                                    │ NoKubernetes-845504       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p cert-options-763535 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p offline-crio-798164                                                                                                                                                                                                    │ offline-crio-798164       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p running-upgrade-784389 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-784389    │ jenkins │ v1.32.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p missing-upgrade-811715                                                                                                                                                                                                 │ missing-upgrade-811715    │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-515251 │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ ssh     │ cert-options-763535 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ -p cert-options-763535 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p running-upgrade-784389 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-784389    │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ delete  │ -p cert-options-763535                                                                                                                                                                                                    │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p stopped-upgrade-312782 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-312782    │ jenkins │ v1.32.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-515251                                                                                                                                                                                              │ kubernetes-upgrade-515251 │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-515251 │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ delete  │ -p running-upgrade-784389                                                                                                                                                                                                 │ running-upgrade-784389    │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ stop    │ stopped-upgrade-312782 stop                                                                                                                                                                                               │ stopped-upgrade-312782    │ jenkins │ v1.32.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p pause-322482 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-322482              │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p stopped-upgrade-312782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-312782    │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ delete  │ -p stopped-upgrade-312782                                                                                                                                                                                                 │ stopped-upgrade-312782    │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p auto-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-732849               │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ start   │ -p pause-322482 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-322482              │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:12 UTC │
	│ pause   │ -p pause-322482 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-322482              │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:11:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:11:56.878005  228538 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:11:56.878294  228538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:11:56.878305  228538 out.go:374] Setting ErrFile to fd 2...
	I1108 09:11:56.878311  228538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:11:56.878523  228538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:11:56.878966  228538 out.go:368] Setting JSON to false
	I1108 09:11:56.880136  228538 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3268,"bootTime":1762589849,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:11:56.880230  228538 start.go:143] virtualization: kvm guest
	I1108 09:11:56.882466  228538 out.go:179] * [pause-322482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:11:56.883809  228538 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:11:56.883822  228538 notify.go:221] Checking for updates...
	I1108 09:11:56.885955  228538 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:11:56.887479  228538 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:11:56.889338  228538 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:11:56.890657  228538 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:11:56.891872  228538 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:11:53.872357  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:11:53.872408  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:11:56.893403  228538 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:11:56.893964  228538 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:11:56.917997  228538 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:11:56.918076  228538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:11:56.977423  228538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:11:56.967023195 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:11:56.977524  228538 docker.go:319] overlay module found
	I1108 09:11:56.979161  228538 out.go:179] * Using the docker driver based on existing profile
	I1108 09:11:56.980340  228538 start.go:309] selected driver: docker
	I1108 09:11:56.980356  228538 start.go:930] validating driver "docker" against &{Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikub
eCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false regi
stry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:11:56.980460  228538 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:11:56.980534  228538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:11:57.037312  228538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:11:57.02659194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x8
6_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[ma
p[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:11:57.038004  228538 cni.go:84] Creating CNI manager for ""
	I1108 09:11:57.038063  228538 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:11:57.038129  228538 start.go:353] cluster config:
	{Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false
storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:11:57.040212  228538 out.go:179] * Starting "pause-322482" primary control-plane node in "pause-322482" cluster
	I1108 09:11:57.041462  228538 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:11:57.042525  228538 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:11:57.043593  228538 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:11:57.043620  228538 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:11:57.043636  228538 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:11:57.043644  228538 cache.go:59] Caching tarball of preloaded images
	I1108 09:11:57.043794  228538 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:11:57.043818  228538 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:11:57.043995  228538 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/config.json ...
	I1108 09:11:57.065333  228538 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:11:57.065352  228538 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:11:57.065369  228538 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:11:57.065397  228538 start.go:360] acquireMachinesLock for pause-322482: {Name:mkbc3c6e2e0d0256e50f18ec85a056408e079d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:11:57.065462  228538 start.go:364] duration metric: took 43.477µs to acquireMachinesLock for "pause-322482"
	I1108 09:11:57.065484  228538 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:11:57.065493  228538 fix.go:54] fixHost starting: 
	I1108 09:11:57.065687  228538 cli_runner.go:164] Run: docker container inspect pause-322482 --format={{.State.Status}}
	I1108 09:11:57.084335  228538 fix.go:112] recreateIfNeeded on pause-322482: state=Running err=<nil>
	W1108 09:11:57.084363  228538 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:11:57.086304  228538 out.go:252] * Updating the running docker "pause-322482" container ...
	I1108 09:11:57.086335  228538 machine.go:94] provisionDockerMachine start ...
	I1108 09:11:57.086392  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.104859  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:57.105175  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:57.105194  228538 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:11:57.233458  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-322482
	
	I1108 09:11:57.233492  228538 ubuntu.go:182] provisioning hostname "pause-322482"
	I1108 09:11:57.233541  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.253115  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:57.253402  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:57.253423  228538 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-322482 && echo "pause-322482" | sudo tee /etc/hostname
	I1108 09:11:57.393684  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-322482
	
	I1108 09:11:57.393761  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.412390  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:57.412600  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:57.412616  228538 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-322482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-322482/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-322482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:11:57.544212  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:11:57.544243  228538 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:11:57.544267  228538 ubuntu.go:190] setting up certificates
	I1108 09:11:57.544292  228538 provision.go:84] configureAuth start
	I1108 09:11:57.544358  228538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-322482
	I1108 09:11:57.562399  228538 provision.go:143] copyHostCerts
	I1108 09:11:57.562458  228538 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:11:57.562471  228538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:11:57.562542  228538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:11:57.562697  228538 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:11:57.562708  228538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:11:57.562740  228538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:11:57.562810  228538 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:11:57.562818  228538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:11:57.562841  228538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:11:57.562916  228538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.pause-322482 san=[127.0.0.1 192.168.76.2 localhost minikube pause-322482]
	I1108 09:11:57.936476  228538 provision.go:177] copyRemoteCerts
	I1108 09:11:57.936529  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:11:57.936560  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.955915  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.050418  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:11:58.067944  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:11:58.085437  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:11:58.103233  228538 provision.go:87] duration metric: took 558.926995ms to configureAuth
	I1108 09:11:58.103265  228538 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:11:58.103483  228538 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:11:58.103567  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.122343  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:58.122555  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:58.122574  228538 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:11:58.424113  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:11:58.424140  228538 machine.go:97] duration metric: took 1.337797414s to provisionDockerMachine
	I1108 09:11:58.424154  228538 start.go:293] postStartSetup for "pause-322482" (driver="docker")
	I1108 09:11:58.424167  228538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:11:58.424223  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:11:58.424262  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.444616  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.540186  228538 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:11:58.543820  228538 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:11:58.543846  228538 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:11:58.543856  228538 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:11:58.543915  228538 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:11:58.543983  228538 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:11:58.544071  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:11:58.551928  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:11:58.570473  228538 start.go:296] duration metric: took 146.288877ms for postStartSetup
	I1108 09:11:58.570556  228538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:11:58.570602  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.588892  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.681611  228538 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:11:58.686572  228538 fix.go:56] duration metric: took 1.621069575s for fixHost
	I1108 09:11:58.686604  228538 start.go:83] releasing machines lock for "pause-322482", held for 1.621129312s
	I1108 09:11:58.686681  228538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-322482
	I1108 09:11:58.705884  228538 ssh_runner.go:195] Run: cat /version.json
	I1108 09:11:58.705927  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.705987  228538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:11:58.706056  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.725944  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.726403  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.818658  228538 ssh_runner.go:195] Run: systemctl --version
	I1108 09:11:58.874116  228538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:11:58.910675  228538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:11:58.915627  228538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:11:58.915700  228538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:11:58.923932  228538 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:11:58.923957  228538 start.go:496] detecting cgroup driver to use...
	I1108 09:11:58.923985  228538 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:11:58.924022  228538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:11:58.938847  228538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:11:58.952674  228538 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:11:58.952728  228538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:11:58.967899  228538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:11:58.980315  228538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:11:59.086389  228538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:11:59.196469  228538 docker.go:234] disabling docker service ...
	I1108 09:11:59.196542  228538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:11:59.211704  228538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:11:59.224646  228538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:11:59.341406  228538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:11:59.464081  228538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:11:59.476977  228538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:11:59.491411  228538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:11:59.491468  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.502447  228538 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:11:59.502513  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.512263  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.521087  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.530298  228538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:11:59.538743  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.548993  228538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.563085  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.576086  228538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:11:59.583556  228538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:11:59.591206  228538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:11:59.701644  228538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:11:59.861494  228538 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:11:59.861556  228538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:11:59.866203  228538 start.go:564] Will wait 60s for crictl version
	I1108 09:11:59.866268  228538 ssh_runner.go:195] Run: which crictl
	I1108 09:11:59.870151  228538 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:11:59.895324  228538 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:11:59.895417  228538 ssh_runner.go:195] Run: crio --version
	I1108 09:11:59.923560  228538 ssh_runner.go:195] Run: crio --version
	I1108 09:11:59.954199  228538 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:11:55.452264  225578 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:11:55.456573  225578 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:11:55.456588  225578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:11:55.469659  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:11:55.675820  225578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:11:55.675916  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:55.675963  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-732849 minikube.k8s.io/updated_at=2025_11_08T09_11_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=auto-732849 minikube.k8s.io/primary=true
	I1108 09:11:55.758668  225578 ops.go:34] apiserver oom_adj: -16
	I1108 09:11:55.758675  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:56.258795  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:56.759487  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:57.258899  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:57.759466  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:58.259307  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:58.759429  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:59.258823  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:59.758791  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:12:00.259498  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:12:00.759383  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:12:00.825466  225578 kubeadm.go:1114] duration metric: took 5.149606335s to wait for elevateKubeSystemPrivileges
	I1108 09:12:00.825505  225578 kubeadm.go:403] duration metric: took 14.795722819s to StartCluster
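The repeated `kubectl get sa default` calls above are a poll: elevateKubeSystemPrivileges waits for the controller manager to create the `default` ServiceAccount before the cluster is considered usable. A minimal shell equivalent of that loop (a sketch, reusing the same binary and kubeconfig paths as the log):

    until sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # the log shows retries at roughly 500ms intervals
    done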
	I1108 09:12:00.825528  225578 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:00.825597  225578 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:12:00.827063  225578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:00.827336  225578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:12:00.827375  225578 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:12:00.827432  225578 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:12:00.827516  225578 addons.go:70] Setting storage-provisioner=true in profile "auto-732849"
	I1108 09:12:00.827533  225578 addons.go:239] Setting addon storage-provisioner=true in "auto-732849"
	I1108 09:12:00.827533  225578 addons.go:70] Setting default-storageclass=true in profile "auto-732849"
	I1108 09:12:00.827562  225578 host.go:66] Checking if "auto-732849" exists ...
	I1108 09:12:00.827613  225578 config.go:182] Loaded profile config "auto-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:00.827563  225578 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-732849"
	I1108 09:12:00.828035  225578 cli_runner.go:164] Run: docker container inspect auto-732849 --format={{.State.Status}}
	I1108 09:12:00.828131  225578 cli_runner.go:164] Run: docker container inspect auto-732849 --format={{.State.Status}}
	I1108 09:12:00.829206  225578 out.go:179] * Verifying Kubernetes components...
	I1108 09:12:00.830519  225578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:12:00.854029  225578 addons.go:239] Setting addon default-storageclass=true in "auto-732849"
	I1108 09:12:00.854067  225578 host.go:66] Checking if "auto-732849" exists ...
	I1108 09:12:00.854541  225578 cli_runner.go:164] Run: docker container inspect auto-732849 --format={{.State.Status}}
	I1108 09:12:00.858426  225578 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:11:59.955533  228538 cli_runner.go:164] Run: docker network inspect pause-322482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:11:59.974511  228538 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:11:59.978865  228538 kubeadm.go:884] updating cluster {Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerName
s:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false regist
ry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:11:59.979015  228538 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:11:59.979089  228538 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:12:00.010809  228538 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:12:00.010831  228538 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:12:00.010876  228538 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:12:00.035950  228538 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:12:00.035972  228538 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:12:00.035979  228538 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:12:00.036085  228538 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-322482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
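Note the empty `ExecStart=` line in the rendered kubelet unit above: a systemd drop-in can only replace a service's command by first clearing the inherited value, so minikube writes the override (installed a few lines below as /etc/systemd/system/kubelet.service.d/10-kubeadm.conf) as a blank assignment followed by the new command line. The effective unit can be inspected on the node with:

    $ systemctl cat kubelet | grep -A1 '^ExecStart='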
	I1108 09:12:00.036145  228538 ssh_runner.go:195] Run: crio config
	I1108 09:12:00.080910  228538 cni.go:84] Creating CNI manager for ""
	I1108 09:12:00.080931  228538 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:12:00.080953  228538 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:12:00.080975  228538 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-322482 NodeName:pause-322482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:12:00.081100  228538 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-322482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:12:00.081158  228538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:12:00.089618  228538 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:12:00.089684  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:12:00.097718  228538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 09:12:00.109915  228538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:12:00.122650  228538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
	I1108 09:12:00.135008  228538 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:12:00.138856  228538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:12:00.258794  228538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:12:00.272718  228538 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482 for IP: 192.168.76.2
	I1108 09:12:00.272745  228538 certs.go:195] generating shared ca certs ...
	I1108 09:12:00.272766  228538 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:00.272927  228538 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:12:00.273000  228538 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:12:00.273018  228538 certs.go:257] generating profile certs ...
	I1108 09:12:00.273138  228538 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key
	I1108 09:12:00.273226  228538 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/apiserver.key.9467e21f
	I1108 09:12:00.273351  228538 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/proxy-client.key
	I1108 09:12:00.273507  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:12:00.273549  228538 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:12:00.273574  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:12:00.273607  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:12:00.273638  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:12:00.273667  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:12:00.273723  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:12:00.274593  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:12:00.294400  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:12:00.314468  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:12:00.334229  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:12:00.355920  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:12:00.375812  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:12:00.394208  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:12:00.412179  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:12:00.429490  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:12:00.449930  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:12:00.469434  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:12:00.487211  228538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:12:00.500455  228538 ssh_runner.go:195] Run: openssl version
	I1108 09:12:00.506659  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:12:00.514896  228538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:12:00.518674  228538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:12:00.518725  228538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:12:00.556493  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:12:00.564876  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:12:00.573525  228538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:12:00.577402  228538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:12:00.577457  228538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:12:00.622970  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:12:00.632402  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:12:00.642048  228538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:12:00.646205  228538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:12:00.646263  228538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:12:00.684909  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
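Each `openssl x509 -hash -noout` call above computes the subject-name hash that OpenSSL uses to look up CA certificates in /etc/ssl/certs; the `<hash>.0` symlinks created right after each call (b5213941.0, 51391683.0, 3ec20f2e.0) follow that convention. For the minikube CA, for example:

    $ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    b5213941
    $ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0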
	I1108 09:12:00.694519  228538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:12:00.699212  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:12:00.743700  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:12:00.781190  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:12:00.820052  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:12:00.874190  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:12:00.929460  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
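The `-checkend 86400` probes above ask openssl whether each control-plane certificate will still be valid 86400 seconds (24 hours) from now; a zero exit status means no renewal is needed. For a single certificate:

    $ sudo openssl x509 -noout -checkend 86400 \
          -in /var/lib/minikube/certs/etcd/server.crt \
        && echo "valid for at least 24h" \
        || echo "expires within 24h; regenerate"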
	I1108 09:12:00.988614  228538 kubeadm.go:401] StartCluster: {Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:12:00.988807  228538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:00.988882  228538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:01.029272  228538 cri.go:89] found id: "895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af"
	I1108 09:12:01.029353  228538 cri.go:89] found id: "91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5"
	I1108 09:12:01.029360  228538 cri.go:89] found id: "a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5"
	I1108 09:12:01.029366  228538 cri.go:89] found id: "1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c"
	I1108 09:12:01.029379  228538 cri.go:89] found id: "38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87"
	I1108 09:12:01.029384  228538 cri.go:89] found id: "3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb"
	I1108 09:12:01.029389  228538 cri.go:89] found id: "353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730"
	I1108 09:12:01.029394  228538 cri.go:89] found id: ""
	I1108 09:12:01.029449  228538 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:12:01.045874  228538 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:01Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:12:01.045974  228538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:12:01.057573  228538 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:12:01.057613  228538 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:12:01.057662  228538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:12:01.067605  228538 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:12:01.068704  228538 kubeconfig.go:125] found "pause-322482" server: "https://192.168.76.2:8443"
	I1108 09:12:01.070505  228538 kapi.go:59] client config for pause-322482: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:12:01.071097  228538 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 09:12:01.071117  228538 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 09:12:01.071124  228538 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 09:12:01.071130  228538 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 09:12:01.071136  228538 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 09:12:01.071546  228538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:12:01.082540  228538 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:12:01.082568  228538 kubeadm.go:602] duration metric: took 24.949444ms to restartPrimaryControlPlane
	I1108 09:12:01.082576  228538 kubeadm.go:403] duration metric: took 93.973541ms to StartCluster
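The reconfiguration decision above is driven by the `diff -u` three lines earlier: the freshly rendered /var/tmp/minikube/kubeadm.yaml.new is compared against the kubeadm.yaml already on the node, and an empty diff (exit status 0) lets minikube skip re-running kubeadm entirely, which is why restartPrimaryControlPlane completes in about 25ms. The same check by hand:

    $ sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new \
        && echo "no reconfiguration needed"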
	I1108 09:12:01.082589  228538 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:01.082651  228538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:12:01.083919  228538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:01.084226  228538 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:12:01.084312  228538 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:12:01.084502  228538 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:01.087510  228538 out.go:179] * Enabled addons: 
	I1108 09:12:01.087650  228538 out.go:179] * Verifying Kubernetes components...
	I1108 09:12:00.859995  225578 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:12:00.860017  225578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:12:00.860123  225578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-732849
	I1108 09:12:00.884862  225578 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:12:00.884890  225578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:12:00.884955  225578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-732849
	I1108 09:12:00.899351  225578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/auto-732849/id_rsa Username:docker}
	I1108 09:12:00.908596  225578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/auto-732849/id_rsa Username:docker}
	I1108 09:12:00.922160  225578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:12:00.991042  225578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:12:01.017211  225578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:12:01.021424  225578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:12:01.136552  225578 node_ready.go:35] waiting up to 15m0s for node "auto-732849" to be "Ready" ...
	I1108 09:12:01.139546  225578 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1108 09:12:01.356230  225578 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
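The sed pipeline at 09:12:00.922 above rewrites the coredns ConfigMap in place; after the `kubectl replace`, the Corefile fragment around the forward plugin looks approximately like this (a reconstruction from the sed expressions, not a captured file):

        log
        errors
        hosts {
           192.168.94.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf

This is what the "host record injected into CoreDNS's ConfigMap" message refers to: pods resolving host.minikube.internal get the gateway address 192.168.94.1 without the query leaving the cluster.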
	I1108 09:12:01.089215  228538 addons.go:515] duration metric: took 4.903692ms for enable addons: enabled=[]
	I1108 09:12:01.089254  228538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:12:01.232193  228538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:12:01.245760  228538 node_ready.go:35] waiting up to 6m0s for node "pause-322482" to be "Ready" ...
	I1108 09:12:01.254218  228538 node_ready.go:49] node "pause-322482" is "Ready"
	I1108 09:12:01.254250  228538 node_ready.go:38] duration metric: took 8.460477ms for node "pause-322482" to be "Ready" ...
	I1108 09:12:01.254265  228538 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:12:01.254344  228538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:12:01.266443  228538 api_server.go:72] duration metric: took 182.179786ms to wait for apiserver process to appear ...
	I1108 09:12:01.266470  228538 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:12:01.266489  228538 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:12:01.270569  228538 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:12:01.271524  228538 api_server.go:141] control plane version: v1.34.1
	I1108 09:12:01.271546  228538 api_server.go:131] duration metric: took 5.070437ms to wait for apiserver health ...
	I1108 09:12:01.271553  228538 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:12:01.274966  228538 system_pods.go:59] 7 kube-system pods found
	I1108 09:12:01.274995  228538 system_pods.go:61] "coredns-66bc5c9577-8h2lz" [551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e] Running
	I1108 09:12:01.275000  228538 system_pods.go:61] "etcd-pause-322482" [047c2f99-089f-4c34-b846-5d78c12d0655] Running
	I1108 09:12:01.275005  228538 system_pods.go:61] "kindnet-tst5j" [d014725c-c216-4b28-8694-a753f2d87b87] Running
	I1108 09:12:01.275011  228538 system_pods.go:61] "kube-apiserver-pause-322482" [0b42403c-e150-4405-923c-7a7c6cba26d9] Running
	I1108 09:12:01.275017  228538 system_pods.go:61] "kube-controller-manager-pause-322482" [bbd64ee3-f85a-486a-b3a7-1d66cdc9a947] Running
	I1108 09:12:01.275023  228538 system_pods.go:61] "kube-proxy-tbffl" [3e9e8a05-5439-48fb-9217-e23f242c9789] Running
	I1108 09:12:01.275031  228538 system_pods.go:61] "kube-scheduler-pause-322482" [ac7ac59d-b3ba-4179-ba5b-e8d04e54d1c9] Running
	I1108 09:12:01.275039  228538 system_pods.go:74] duration metric: took 3.479026ms to wait for pod list to return data ...
	I1108 09:12:01.275052  228538 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:12:01.276984  228538 default_sa.go:45] found service account: "default"
	I1108 09:12:01.277000  228538 default_sa.go:55] duration metric: took 1.943285ms for default service account to be created ...
	I1108 09:12:01.277008  228538 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:12:01.279701  228538 system_pods.go:86] 7 kube-system pods found
	I1108 09:12:01.279723  228538 system_pods.go:89] "coredns-66bc5c9577-8h2lz" [551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e] Running
	I1108 09:12:01.279728  228538 system_pods.go:89] "etcd-pause-322482" [047c2f99-089f-4c34-b846-5d78c12d0655] Running
	I1108 09:12:01.279732  228538 system_pods.go:89] "kindnet-tst5j" [d014725c-c216-4b28-8694-a753f2d87b87] Running
	I1108 09:12:01.279735  228538 system_pods.go:89] "kube-apiserver-pause-322482" [0b42403c-e150-4405-923c-7a7c6cba26d9] Running
	I1108 09:12:01.279739  228538 system_pods.go:89] "kube-controller-manager-pause-322482" [bbd64ee3-f85a-486a-b3a7-1d66cdc9a947] Running
	I1108 09:12:01.279743  228538 system_pods.go:89] "kube-proxy-tbffl" [3e9e8a05-5439-48fb-9217-e23f242c9789] Running
	I1108 09:12:01.279748  228538 system_pods.go:89] "kube-scheduler-pause-322482" [ac7ac59d-b3ba-4179-ba5b-e8d04e54d1c9] Running
	I1108 09:12:01.279755  228538 system_pods.go:126] duration metric: took 2.742856ms to wait for k8s-apps to be running ...
	I1108 09:12:01.279767  228538 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:12:01.279810  228538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:12:01.294135  228538 system_svc.go:56] duration metric: took 14.35867ms WaitForService to wait for kubelet
	I1108 09:12:01.294166  228538 kubeadm.go:587] duration metric: took 209.906788ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:12:01.294187  228538 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:12:01.297359  228538 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:12:01.297384  228538 node_conditions.go:123] node cpu capacity is 8
	I1108 09:12:01.297395  228538 node_conditions.go:105] duration metric: took 3.202379ms to run NodePressure ...
	I1108 09:12:01.297405  228538 start.go:242] waiting for startup goroutines ...
	I1108 09:12:01.297414  228538 start.go:247] waiting for cluster config update ...
	I1108 09:12:01.297424  228538 start.go:256] writing updated cluster config ...
	I1108 09:12:01.297752  228538 ssh_runner.go:195] Run: rm -f paused
	I1108 09:12:01.301899  228538 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:12:01.302924  228538 kapi.go:59] client config for pause-322482: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:12:01.306172  228538 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8h2lz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.310953  228538 pod_ready.go:94] pod "coredns-66bc5c9577-8h2lz" is "Ready"
	I1108 09:12:01.310982  228538 pod_ready.go:86] duration metric: took 4.787483ms for pod "coredns-66bc5c9577-8h2lz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.313167  228538 pod_ready.go:83] waiting for pod "etcd-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.318192  228538 pod_ready.go:94] pod "etcd-pause-322482" is "Ready"
	I1108 09:12:01.318225  228538 pod_ready.go:86] duration metric: took 5.038568ms for pod "etcd-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.320913  228538 pod_ready.go:83] waiting for pod "kube-apiserver-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.326839  228538 pod_ready.go:94] pod "kube-apiserver-pause-322482" is "Ready"
	I1108 09:12:01.326864  228538 pod_ready.go:86] duration metric: took 5.927135ms for pod "kube-apiserver-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.329399  228538 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.706742  228538 pod_ready.go:94] pod "kube-controller-manager-pause-322482" is "Ready"
	I1108 09:12:01.706780  228538 pod_ready.go:86] duration metric: took 377.350138ms for pod "kube-controller-manager-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:58.874077  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:11:58.874126  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:01.906797  228538 pod_ready.go:83] waiting for pod "kube-proxy-tbffl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.306596  228538 pod_ready.go:94] pod "kube-proxy-tbffl" is "Ready"
	I1108 09:12:02.306622  228538 pod_ready.go:86] duration metric: took 399.800524ms for pod "kube-proxy-tbffl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.506468  228538 pod_ready.go:83] waiting for pod "kube-scheduler-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.906791  228538 pod_ready.go:94] pod "kube-scheduler-pause-322482" is "Ready"
	I1108 09:12:02.906821  228538 pod_ready.go:86] duration metric: took 400.323338ms for pod "kube-scheduler-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.906835  228538 pod_ready.go:40] duration metric: took 1.604902811s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:12:02.947942  228538 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:12:02.949669  228538 out.go:179] * Done! kubectl is now configured to use "pause-322482" cluster and "default" namespace by default
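The pod_ready waits above check the Ready condition on kube-system pods matching the listed labels through the Kubernetes API. A rough kubectl equivalent of one such check (a sketch; the test code uses client-go directly, not kubectl):

    kubectl -n kube-system get pods \
      -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)' \
      -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'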
	I1108 09:12:01.357181  225578 addons.go:515] duration metric: took 529.754115ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:12:01.643261  225578 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-732849" context rescaled to 1 replicas
	W1108 09:12:03.139890  225578 node_ready.go:57] node "auto-732849" has "Ready":"False" status (will retry)
	
	
	==> CRI-O <==
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.796340702Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.797310034Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.797331429Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.797344396Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.798048934Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.798064236Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.802092916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.802115051Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.802742378Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.803227046Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.803302002Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.809676427Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.85379259Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-8h2lz Namespace:kube-system ID:1411c273490ebf654cf4f5ddc0f1f416f77ee794e4a297aa12605712b4fe0b4d UID:551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e NetNS:/var/run/netns/fc51d3d5-fb6d-474d-a983-e6c444f88a6c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520290}] Aliases:map[]}"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854032942Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-8h2lz for CNI network kindnet (type=ptp)"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854609324Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854650701Z" level=info msg="Starting seccomp notifier watcher"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854703515Z" level=info msg="Create NRI interface"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854821233Z" level=info msg="built-in NRI default validator is disabled"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854837303Z" level=info msg="runtime interface created"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854852078Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854860991Z" level=info msg="runtime interface starting up..."
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854868786Z" level=info msg="starting plugins..."
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854885234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.855393616Z" level=info msg="No systemd watchdog enabled"
	Nov 08 09:11:59 pause-322482 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
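For readability, the runtime-related settings buried in the escaped single-line "Current CRI-O configuration" dump above decode to roughly this TOML, which also explains the earlier `runc list` failure (crun, not runc, is the default runtime):

    [crio.runtime]
      default_runtime = "crun"
      cgroup_manager = "systemd"
      conmon_cgroup = "pod"
    [crio.runtime.runtimes.crun]
      runtime_path = "/usr/libexec/crio/crun"
      runtime_root = "/run/crun"
    [crio.runtime.runtimes.runc]
      runtime_path = "/usr/libexec/crio/runc"
      runtime_root = "/run/runc"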
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	895c5f7f119d6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   11 seconds ago      Running             coredns                   0                   1411c273490eb       coredns-66bc5c9577-8h2lz               kube-system
	91979a7219cae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   22 seconds ago      Running             kube-proxy                0                   0749049a1ab72       kube-proxy-tbffl                       kube-system
	a30c2c1d7897e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   22 seconds ago      Running             kindnet-cni               0                   dfed9a62e530d       kindnet-tst5j                          kube-system
	1dc91bd79da78       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   33 seconds ago      Running             kube-apiserver            0                   e202d63cc2de1       kube-apiserver-pause-322482            kube-system
	38076341860d8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   33 seconds ago      Running             kube-controller-manager   0                   9435ca79159a0       kube-controller-manager-pause-322482   kube-system
	3891b7663519a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   33 seconds ago      Running             etcd                      0                   07cdf4a1cc6ff       etcd-pause-322482                      kube-system
	353c9ecf18e5b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   33 seconds ago      Running             kube-scheduler            0                   0d2cd33f6bfdf       kube-scheduler-pause-322482            kube-system
	
	
	==> coredns [895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56180 - 22928 "HINFO IN 5198234154057578563.4147425357032121014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.419215411s
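
The random HINFO query in the last line is CoreDNS's loop-detection probe: it resolves a nonsense name through its own forwarding path, and the NXDOMAIN answer (0.42s) confirms no forwarding loop. To probe the same resolver from elsewhere, a minimal sketch in Go, assuming the kube-dns ClusterIP 10.96.0.10 (allocated later in the apiserver log) is reachable from wherever it runs:

    package main

    import (
    	"context"
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	// Send every lookup to the cluster DNS Service instead of the
    	// resolver in /etc/resolv.conf.
    	r := &net.Resolver{
    		PreferGo: true,
    		Dial: func(ctx context.Context, network, address string) (net.Conn, error) {
    			d := net.Dialer{Timeout: 2 * time.Second}
    			return d.DialContext(ctx, "udp", "10.96.0.10:53")
    		},
    	}
    	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
    	defer cancel()
    	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
    	if err != nil {
    		fmt.Println("cluster DNS not answering:", err)
    		return
    	}
    	fmt.Println("kubernetes.default resolves to:", addrs)
    }

A successful run prints the kubernetes Service ClusterIP (10.96.0.1 in this cluster), which is a reasonable stand-in health check for the coredns pod above.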
	
	
	==> describe nodes <==
	Name:               pause-322482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-322482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=pause-322482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_11_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:11:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-322482
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:11:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-322482
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a3f8045d-1e00-45e0-945a-6624eab8a9bc
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8h2lz                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     23s
	  kube-system                 etcd-pause-322482                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-tst5j                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      23s
	  kube-system                 kube-apiserver-pause-322482             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-pause-322482    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-tbffl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         23s
	  kube-system                 kube-scheduler-pause-322482             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 22s   kube-proxy       
	  Normal  Starting                 29s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s   kubelet          Node pause-322482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s   kubelet          Node pause-322482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s   kubelet          Node pause-322482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           24s   node-controller  Node pause-322482 event: Registered Node pause-322482 in Controller
	  Normal  NodeReady                12s   kubelet          Node pause-322482 status is now: NodeReady
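
As a consistency check on the Allocated resources table above: summing the per-pod CPU requests gives 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 0 (kube-proxy) + 100m (kube-scheduler) = 850m, and 850m / 8000m of the 8-CPU node is about 10.6%, printed as 10%. The lone CPU limit is kindnet's 100m. Memory requests sum to 70Mi + 100Mi + 50Mi = 220Mi and memory limits to 170Mi + 50Mi = 220Mi, both under 1% of the ~32GiB node and so printed as 0%.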
	
	
	==> dmesg <==
	[  +0.084884] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.205659] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 8 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.054730] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +2.047820] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +4.031573] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +8.127109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[Nov 8 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
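
The martian-source entries above are the kernel logging packets whose source address is not valid on the interface they arrived on: here, packets claiming to come from 127.0.0.1 addressed to pod IP 10.244.0.20 on eth0. Their timestamps (08:31 to 08:32) predate this cluster's 09:11 startup, so they are residue of an earlier test on the same host rather than activity of pause-322482.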
	
	
	==> etcd [3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb] <==
	{"level":"info","ts":"2025-11-08T09:11:39.234740Z","caller":"traceutil/trace.go:172","msg":"trace[356604010] transaction","detail":"{read_only:false; number_of_response:0; response_revision:264; }","duration":"344.729876ms","start":"2025-11-08T09:11:38.890004Z","end":"2025-11-08T09:11:39.234734Z","steps":["trace[356604010] 'process raft request'  (duration: 344.556307ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.234780Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:38.889992Z","time spent":"344.767941ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-322482\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-322482\" value_size:4321 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:11:39.234583Z","caller":"traceutil/trace.go:172","msg":"trace[1252827889] transaction","detail":"{read_only:false; number_of_response:0; response_revision:264; }","duration":"344.602955ms","start":"2025-11-08T09:11:38.889970Z","end":"2025-11-08T09:11:39.234573Z","steps":["trace[1252827889] 'process raft request'  (duration: 344.55678ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.235116Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:38.889955Z","time spent":"345.111648ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-322482\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-322482\" value_size:6193 >> failure:<>"}
	{"level":"warn","ts":"2025-11-08T09:11:39.234869Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:38.908616Z","time spent":"326.048876ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4960,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-322482\" mod_revision:228 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-322482\" value_size:4898 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-322482\" > >"}
	{"level":"warn","ts":"2025-11-08T09:11:39.621699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.658876ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356504945783439 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:11:39.621940Z","caller":"traceutil/trace.go:172","msg":"trace[1715006353] transaction","detail":"{read_only:false; response_revision:266; number_of_response:1; }","duration":"382.483034ms","start":"2025-11-08T09:11:39.239435Z","end":"2025-11-08T09:11:39.621919Z","steps":["trace[1715006353] 'process raft request'  (duration: 125.54116ms)","trace[1715006353] 'compare'  (duration: 256.539984ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:11:39.622079Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:39.239421Z","time spent":"382.610755ms","remote":"127.0.0.1:55668","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":201,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" value_size:130 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:11:39.622221Z","caller":"traceutil/trace.go:172","msg":"trace[1300107591] transaction","detail":"{read_only:false; response_revision:268; number_of_response:1; }","duration":"379.118572ms","start":"2025-11-08T09:11:39.243091Z","end":"2025-11-08T09:11:39.622209Z","steps":["trace[1300107591] 'process raft request'  (duration: 379.034154ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.622335Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:39.243076Z","time spent":"379.188918ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7268,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-322482\" mod_revision:242 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-322482\" value_size:7197 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-322482\" > >"}
	{"level":"info","ts":"2025-11-08T09:11:39.622339Z","caller":"traceutil/trace.go:172","msg":"trace[105450183] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"382.604224ms","start":"2025-11-08T09:11:39.239721Z","end":"2025-11-08T09:11:39.622325Z","steps":["trace[105450183] 'process raft request'  (duration: 382.063727ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.622408Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:39.239709Z","time spent":"382.65986ms","remote":"127.0.0.1:55976","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":844,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kindnet\" value_size:799 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:11:39.854580Z","caller":"traceutil/trace.go:172","msg":"trace[1659113502] transaction","detail":"{read_only:false; response_revision:272; number_of_response:1; }","duration":"132.902079ms","start":"2025-11-08T09:11:39.721659Z","end":"2025-11-08T09:11:39.854561Z","steps":["trace[1659113502] 'process raft request'  (duration: 132.390821ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:39.937425Z","caller":"traceutil/trace.go:172","msg":"trace[846374865] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"214.966676ms","start":"2025-11-08T09:11:39.722438Z","end":"2025-11-08T09:11:39.937404Z","steps":["trace[846374865] 'process raft request'  (duration: 214.883681ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:39.937449Z","caller":"traceutil/trace.go:172","msg":"trace[1752848594] transaction","detail":"{read_only:false; response_revision:273; number_of_response:1; }","duration":"215.187956ms","start":"2025-11-08T09:11:39.722242Z","end":"2025-11-08T09:11:39.937430Z","steps":["trace[1752848594] 'process raft request'  (duration: 214.959204ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:40.109611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.777094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-322482\" limit:1 ","response":"range_response_count:1 size:4811"}
	{"level":"info","ts":"2025-11-08T09:11:40.109740Z","caller":"traceutil/trace.go:172","msg":"trace[1194401422] range","detail":"{range_begin:/registry/minions/pause-322482; range_end:; response_count:1; response_revision:274; }","duration":"100.911888ms","start":"2025-11-08T09:11:40.008809Z","end":"2025-11-08T09:11:40.109720Z","steps":["trace[1194401422] 'agreement among raft nodes before linearized reading'  (duration: 61.207936ms)","trace[1194401422] 'range keys from in-memory index tree'  (duration: 39.46836ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:11:40.109870Z","caller":"traceutil/trace.go:172","msg":"trace[1161015891] transaction","detail":"{read_only:false; response_revision:277; number_of_response:1; }","duration":"102.743464ms","start":"2025-11-08T09:11:40.007113Z","end":"2025-11-08T09:11:40.109856Z","steps":["trace[1161015891] 'process raft request'  (duration: 102.70609ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:40.110052Z","caller":"traceutil/trace.go:172","msg":"trace[1935287880] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"164.12605ms","start":"2025-11-08T09:11:39.945913Z","end":"2025-11-08T09:11:40.110039Z","steps":["trace[1935287880] 'process raft request'  (duration: 163.858778ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:40.110097Z","caller":"traceutil/trace.go:172","msg":"trace[650428140] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"167.02331ms","start":"2025-11-08T09:11:39.943057Z","end":"2025-11-08T09:11:40.110080Z","steps":["trace[650428140] 'process raft request'  (duration: 127.000767ms)","trace[650428140] 'compare'  (duration: 39.592815ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:11:40.301368Z","caller":"traceutil/trace.go:172","msg":"trace[1947465739] linearizableReadLoop","detail":"{readStateIndex:290; appliedIndex:290; }","duration":"121.014824ms","start":"2025-11-08T09:11:40.180332Z","end":"2025-11-08T09:11:40.301347Z","steps":["trace[1947465739] 'read index received'  (duration: 121.006361ms)","trace[1947465739] 'applied index is now lower than readState.Index'  (duration: 7.181µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:11:40.398055Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.697344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:40.398139Z","caller":"traceutil/trace.go:172","msg":"trace[775499133] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:278; }","duration":"217.800926ms","start":"2025-11-08T09:11:40.180321Z","end":"2025-11-08T09:11:40.398122Z","steps":["trace[775499133] 'agreement among raft nodes before linearized reading'  (duration: 121.096688ms)","trace[775499133] 'range keys from in-memory index tree'  (duration: 96.564012ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:11:40.398195Z","caller":"traceutil/trace.go:172","msg":"trace[274750649] transaction","detail":"{read_only:false; response_revision:280; number_of_response:1; }","duration":"235.35411ms","start":"2025-11-08T09:11:40.162829Z","end":"2025-11-08T09:11:40.398183Z","steps":["trace[274750649] 'process raft request'  (duration: 235.314458ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:40.398259Z","caller":"traceutil/trace.go:172","msg":"trace[38279491] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"277.364846ms","start":"2025-11-08T09:11:40.120876Z","end":"2025-11-08T09:11:40.398241Z","steps":["trace[38279491] 'process raft request'  (duration: 180.609095ms)","trace[38279491] 'compare'  (duration: 96.537569ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:12:06 up 54 min,  0 user,  load average: 3.58, 2.95, 1.79
	Linux pause-322482 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5] <==
	I1108 09:11:43.938320       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:11:43.940385       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:11:43.940538       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:11:43.940554       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:11:43.940567       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:11:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:11:44.138036       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:11:44.138065       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:11:44.138077       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:11:44.138226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:11:44.534522       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:11:44.534567       1 metrics.go:72] Registering metrics
	I1108 09:11:44.534644       1 controller.go:711] "Syncing nftables rules"
	I1108 09:11:54.138196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:11:54.138254       1 main.go:301] handling current node
	I1108 09:12:04.138145       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:12:04.138176       1 main.go:301] handling current node
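
kindnet's only error above is the failed dial of /var/run/nri/nri.sock at 09:11:44, before the 09:11:59 crio restart that creates the NRI interface; the network-policy controller carries on without it. A minimal sketch reproducing that dial, assuming it runs inside the node (e.g. via minikube ssh) where the socket path is visible:

    package main

    import (
    	"fmt"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	const sock = "/var/run/nri/nri.sock"
    	if _, err := os.Stat(sock); err != nil {
    		// Matches the "no such file or directory" kindnet logs above.
    		fmt.Println("socket missing:", err)
    		return
    	}
    	c, err := net.DialTimeout("unix", sock, 2*time.Second)
    	if err != nil {
    		fmt.Println("socket present but not accepting:", err)
    		return
    	}
    	c.Close()
    	fmt.Println("NRI socket is up")
    }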
	
	
	==> kube-apiserver [1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c] <==
	E1108 09:11:35.283183       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1108 09:11:35.306092       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1108 09:11:35.330233       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:11:35.335873       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:35.335883       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:11:35.342072       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:35.343162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:11:35.510613       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:11:36.133757       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:11:36.138118       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:11:36.138141       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:11:36.632878       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:11:36.669536       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:11:36.737887       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:11:36.745526       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:11:36.746617       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:11:36.750911       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:11:37.168844       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:11:37.999367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:11:38.009738       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:11:38.016905       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:11:42.772397       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:42.775955       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:42.920890       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:11:43.272088       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
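
Each "quota admission added evaluator" line is logged the first time the quota admission plugin sees an object of that resource type, so the sequence doubles as a bootstrap timeline: namespaces and leases first, then RBAC roles and bindings, the default/kubernetes Service endpoints, ServiceAccounts, the CoreDNS Deployment and the DaemonSets at 09:11:38, and finally ReplicaSets and ControllerRevisions once the controller manager starts creating workloads at 09:11:42-43.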
	
	
	==> kube-controller-manager [38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87] <==
	I1108 09:11:42.138651       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:11:42.144905       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:11:42.144990       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:11:42.145049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:11:42.145061       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:11:42.145068       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:11:42.151157       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:11:42.152345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:11:42.153594       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-322482" podCIDRs=["10.244.0.0/24"]
	I1108 09:11:42.155723       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:11:42.156919       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:11:42.167503       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:11:42.167901       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 09:11:42.168013       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:11:42.168028       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:11:42.169215       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:11:42.169234       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:11:42.169241       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:11:42.169304       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:11:42.170328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:11:42.171549       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:11:42.171650       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:11:42.172938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:11:42.188820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:11:57.118250       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
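
After the informer caches sync at 09:11:42, the node-ipam controller assigns PodCIDR 10.244.0.0/24 to pause-322482 (matching the describe-nodes output above), and at 09:11:57 the node-lifecycle controller exits master disruption mode, three seconds after the kubelet reported NodeReady at 09:11:54.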
	
	
	==> kube-proxy [91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5] <==
	I1108 09:11:43.691255       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:11:43.759005       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:11:43.859432       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:11:43.859462       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:11:43.859566       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:11:43.880206       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:11:43.880324       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:11:43.886361       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:11:43.886808       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:11:43.886843       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:11:43.888572       1 config.go:309] "Starting node config controller"
	I1108 09:11:43.888589       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:11:43.888597       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:11:43.888750       1 config.go:200] "Starting service config controller"
	I1108 09:11:43.888769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:11:43.888772       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:11:43.888779       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:11:43.888816       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:11:43.888822       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:11:43.988837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:11:43.988922       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:11:43.988971       1 shared_informer.go:356] "Caches are synced" controller="service config"
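
kube-proxy comes up cleanly here in iptables mode (dual-stack, IPv4 primary) and syncs all four of its config caches within roughly 300ms; the only complaint is the unset nodePortAddresses, which, as the message itself notes, just means NodePort connections are accepted on every local IP unless narrowed with --nodeport-addresses.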
	
	
	==> kube-scheduler [353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730] <==
	E1108 09:11:35.193738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:11:35.193751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:11:35.193777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:11:35.193816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:11:35.193823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:11:35.193895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:11:35.193910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:11:35.194121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:11:35.194452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:11:36.004055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:11:36.074488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:11:36.089080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:11:36.097561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:11:36.127029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:11:36.195846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:11:36.250242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:11:36.263601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:11:36.265501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:11:36.276755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:11:36.346357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:11:36.370995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:11:36.414619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:11:36.449711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:11:36.465774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1108 09:11:39.287614       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
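
The burst of "Failed to watch ... forbidden" errors above is consistent with the usual scheduler startup race: its informers begin listing resources at 09:11:35-36, before the API server has finished bootstrapping the system:kube-scheduler RBAC bindings. Once authorization settles, the caches sync cleanly at 09:11:39 and no further errors appear.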
	
	
	==> kubelet <==
	Nov 08 09:11:39 pause-322482 kubelet[1327]: E1108 09:11:39.237682    1327 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-322482\" already exists" pod="kube-system/kube-scheduler-pause-322482"
	Nov 08 09:11:39 pause-322482 kubelet[1327]: I1108 09:11:39.623599    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-322482" podStartSLOduration=2.623575257 podStartE2EDuration="2.623575257s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.62351917 +0000 UTC m=+1.845903161" watchObservedRunningTime="2025-11-08 09:11:39.623575257 +0000 UTC m=+1.845959254"
	Nov 08 09:11:39 pause-322482 kubelet[1327]: I1108 09:11:39.623770    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-322482" podStartSLOduration=2.623756124 podStartE2EDuration="2.623756124s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.237777106 +0000 UTC m=+1.460161102" watchObservedRunningTime="2025-11-08 09:11:39.623756124 +0000 UTC m=+1.846140122"
	Nov 08 09:11:39 pause-322482 kubelet[1327]: I1108 09:11:39.716342    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-322482" podStartSLOduration=2.716318963 podStartE2EDuration="2.716318963s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.716270544 +0000 UTC m=+1.938654540" watchObservedRunningTime="2025-11-08 09:11:39.716318963 +0000 UTC m=+1.938702963"
	Nov 08 09:11:40 pause-322482 kubelet[1327]: I1108 09:11:40.112254    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-322482" podStartSLOduration=3.112231013 podStartE2EDuration="3.112231013s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.93932765 +0000 UTC m=+2.161711646" watchObservedRunningTime="2025-11-08 09:11:40.112231013 +0000 UTC m=+2.334615008"
	Nov 08 09:11:42 pause-322482 kubelet[1327]: I1108 09:11:42.192666    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:11:42 pause-322482 kubelet[1327]: I1108 09:11:42.193264    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.389865    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62h6n\" (UniqueName: \"kubernetes.io/projected/d014725c-c216-4b28-8694-a753f2d87b87-kube-api-access-62h6n\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.389928    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d014725c-c216-4b28-8694-a753f2d87b87-cni-cfg\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390027    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d014725c-c216-4b28-8694-a753f2d87b87-xtables-lock\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390097    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e9e8a05-5439-48fb-9217-e23f242c9789-kube-proxy\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390124    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e9e8a05-5439-48fb-9217-e23f242c9789-lib-modules\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390144    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvt8\" (UniqueName: \"kubernetes.io/projected/3e9e8a05-5439-48fb-9217-e23f242c9789-kube-api-access-fpvt8\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390172    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d014725c-c216-4b28-8694-a753f2d87b87-lib-modules\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390201    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e9e8a05-5439-48fb-9217-e23f242c9789-xtables-lock\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.932454    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tst5j" podStartSLOduration=0.932409853 podStartE2EDuration="932.409853ms" podCreationTimestamp="2025-11-08 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:43.910047025 +0000 UTC m=+6.132431022" watchObservedRunningTime="2025-11-08 09:11:43.932409853 +0000 UTC m=+6.154793849"
	Nov 08 09:11:47 pause-322482 kubelet[1327]: I1108 09:11:47.488223    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tbffl" podStartSLOduration=4.488198252 podStartE2EDuration="4.488198252s" podCreationTimestamp="2025-11-08 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:43.935744098 +0000 UTC m=+6.158128095" watchObservedRunningTime="2025-11-08 09:11:47.488198252 +0000 UTC m=+9.710582251"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.255836    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.376591    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e-config-volume\") pod \"coredns-66bc5c9577-8h2lz\" (UID: \"551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e\") " pod="kube-system/coredns-66bc5c9577-8h2lz"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.376649    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjkc\" (UniqueName: \"kubernetes.io/projected/551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e-kube-api-access-gwjkc\") pod \"coredns-66bc5c9577-8h2lz\" (UID: \"551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e\") " pod="kube-system/coredns-66bc5c9577-8h2lz"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.936104    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8h2lz" podStartSLOduration=11.936081575 podStartE2EDuration="11.936081575s" podCreationTimestamp="2025-11-08 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:54.936083046 +0000 UTC m=+17.158467043" watchObservedRunningTime="2025-11-08 09:11:54.936081575 +0000 UTC m=+17.158465571"
	Nov 08 09:12:03 pause-322482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:12:03 pause-322482 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:12:03 pause-322482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:12:03 pause-322482 systemd[1]: kubelet.service: Consumed 1.138s CPU time.
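
The journal ends with systemd stopping kubelet at 09:12:03 and the unit exiting cleanly; since this post-mortem belongs to TestPause/serial/Pause, that stop is most plausibly the pause operation itself, executed moments before these logs were gathered (the kernel section above shows 09:12:06).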
	

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-322482 -n pause-322482
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-322482 -n pause-322482: exit status 2 (323.472733ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-322482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
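This last command lists, across all namespaces, the names of any pods whose status.phase is not Running; nothing is printed before the post-mortem closes, so every pod on pause-322482 was still in the Running phase when the logs were gathered.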
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestPause/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestPause/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect pause-322482
helpers_test.go:243: (dbg) docker inspect pause-322482:

-- stdout --
	[
	    {
	        "Id": "643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3",
	        "Created": "2025-11-08T09:11:21.921111224Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 222105,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:11:21.961233448Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/hostname",
	        "HostsPath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/hosts",
	        "LogPath": "/var/lib/docker/containers/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3/643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3-json.log",
	        "Name": "/pause-322482",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-322482:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "pause-322482",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "643d6aaa38a559bddfc0267463f3f48aa025950a0cc28635364ef49c5f816dd3",
	                "LowerDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e1713aa45c5f1d186e031ec3f2f96ddc691d45b777e64dc7aa2fb0a38f9bc644/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-322482",
	                "Source": "/var/lib/docker/volumes/pause-322482/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-322482",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-322482",
	                "name.minikube.sigs.k8s.io": "pause-322482",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7668ea48e8babf7c5afcc2b316d7cd81e38ab3b4fe1a33ea1d3d3f2a39f666fb",
	            "SandboxKey": "/var/run/docker/netns/7668ea48e8ba",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33049"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33050"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33053"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33051"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33052"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-322482": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "2a:39:dd:35:ba:19",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "4f49ae350ddc4c163fff2777330ce0190d365ddf7c80549d6d0ce21ec674b83b",
	                    "EndpointID": "a9344bed91f7952472c8d9ac24ee8a71b77a0dc9ca5e40c793ef647e146739b8",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "pause-322482",
	                        "643d6aaa38a5"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
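The inspect output above is how the harness reaches the container from the host: each guest port under NetworkSettings.Ports is published on an ephemeral 127.0.0.1 port (22/tcp to 33049 for SSH, 8443/tcp to 33052 for the apiserver), and the start logs below read the SSH binding with a Go template via `docker container inspect -f`. A standalone sketch of the same lookup, decoding only the fields visible in the JSON (the struct names here are illustrative, not minikube's):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// portBinding mirrors the objects under NetworkSettings.Ports above.
	type portBinding struct {
		HostIp   string
		HostPort string
	}

	// inspectEntry keeps only the slice of the inspect output we need.
	type inspectEntry struct {
		NetworkSettings struct {
			Ports map[string][]portBinding
		}
	}

	func main() {
		// docker container inspect emits a JSON array, one entry per container.
		out, err := exec.Command("docker", "container", "inspect", "pause-322482").Output()
		if err != nil {
			panic(err)
		}
		var entries []inspectEntry
		if err := json.Unmarshal(out, &entries); err != nil {
			panic(err)
		}
		// The 22/tcp binding is the SSH endpoint (127.0.0.1:33049 above).
		fmt.Println(entries[0].NetworkSettings.Ports["22/tcp"][0].HostPort)
	}
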
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-322482 -n pause-322482
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-322482 -n pause-322482: exit status 2 (318.443108ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
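The `--format={{.Host}}` argument is a Go text/template rendered against minikube's status struct, which is why the command prints only "Running" while still exiting non-zero: the exit code reports component state independently of the one field selected. A minimal sketch of that rendering, with the struct name and field set assumed for illustration:

	package main

	import (
		"os"
		"text/template"
	)

	// Status stands in for the struct minikube renders --format against;
	// its real definition lives in minikube and carries more fields.
	type Status struct {
		Host string
	}

	func main() {
		// --format={{.Host}} selects a single field, so "Running" is all
		// that reaches stdout even when other components are unhealthy.
		tmpl := template.Must(template.New("status").Parse("{{.Host}}"))
		if err := tmpl.Execute(os.Stdout, Status{Host: "Running"}); err != nil {
			panic(err)
		}
	}
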
helpers_test.go:252: <<< TestPause/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestPause/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p pause-322482 logs -n 25
helpers_test.go:260: TestPause/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬───────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                           ARGS                                                                                                            │          PROFILE          │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼───────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p NoKubernetes-845504 --driver=docker  --container-runtime=crio                                                                                                                                                          │ NoKubernetes-845504       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p force-systemd-env-004778                                                                                                                                                                                               │ force-systemd-env-004778  │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p cert-expiration-640168 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio                                                                                                                    │ cert-expiration-640168    │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ -p NoKubernetes-845504 sudo systemctl is-active --quiet service kubelet                                                                                                                                                   │ NoKubernetes-845504       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │                     │
	│ delete  │ -p NoKubernetes-845504                                                                                                                                                                                                    │ NoKubernetes-845504       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p cert-options-763535 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p offline-crio-798164                                                                                                                                                                                                    │ offline-crio-798164       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p running-upgrade-784389 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ running-upgrade-784389    │ jenkins │ v1.32.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ delete  │ -p missing-upgrade-811715                                                                                                                                                                                                 │ missing-upgrade-811715    │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-515251 │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ ssh     │ cert-options-763535 ssh openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt                                                                                                                               │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ ssh     │ -p cert-options-763535 -- sudo cat /etc/kubernetes/admin.conf                                                                                                                                                             │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p running-upgrade-784389 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ running-upgrade-784389    │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ delete  │ -p cert-options-763535                                                                                                                                                                                                    │ cert-options-763535       │ jenkins │ v1.37.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:10 UTC │
	│ start   │ -p stopped-upgrade-312782 --memory=3072 --vm-driver=docker  --container-runtime=crio                                                                                                                                      │ stopped-upgrade-312782    │ jenkins │ v1.32.0 │ 08 Nov 25 09:10 UTC │ 08 Nov 25 09:11 UTC │
	│ stop    │ -p kubernetes-upgrade-515251                                                                                                                                                                                              │ kubernetes-upgrade-515251 │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                  │ kubernetes-upgrade-515251 │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ delete  │ -p running-upgrade-784389                                                                                                                                                                                                 │ running-upgrade-784389    │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ stop    │ stopped-upgrade-312782 stop                                                                                                                                                                                               │ stopped-upgrade-312782    │ jenkins │ v1.32.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p pause-322482 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio                                                                                                                 │ pause-322482              │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p stopped-upgrade-312782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                  │ stopped-upgrade-312782    │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ delete  │ -p stopped-upgrade-312782                                                                                                                                                                                                 │ stopped-upgrade-312782    │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:11 UTC │
	│ start   │ -p auto-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio                                                                                                   │ auto-732849               │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │                     │
	│ start   │ -p pause-322482 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio                                                                                                                                          │ pause-322482              │ jenkins │ v1.37.0 │ 08 Nov 25 09:11 UTC │ 08 Nov 25 09:12 UTC │
	│ pause   │ -p pause-322482 --alsologtostderr -v=5                                                                                                                                                                                    │ pause-322482              │ jenkins │ v1.37.0 │ 08 Nov 25 09:12 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴───────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:11:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:11:56.878005  228538 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:11:56.878294  228538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:11:56.878305  228538 out.go:374] Setting ErrFile to fd 2...
	I1108 09:11:56.878311  228538 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:11:56.878523  228538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:11:56.878966  228538 out.go:368] Setting JSON to false
	I1108 09:11:56.880136  228538 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3268,"bootTime":1762589849,"procs":309,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:11:56.880230  228538 start.go:143] virtualization: kvm guest
	I1108 09:11:56.882466  228538 out.go:179] * [pause-322482] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:11:56.883809  228538 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:11:56.883822  228538 notify.go:221] Checking for updates...
	I1108 09:11:56.885955  228538 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:11:56.887479  228538 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:11:56.889338  228538 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:11:56.890657  228538 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:11:56.891872  228538 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:11:53.872357  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:11:53.872408  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:11:56.893403  228538 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:11:56.893964  228538 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:11:56.917997  228538 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:11:56.918076  228538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:11:56.977423  228538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:11:56.967023195 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:11:56.977524  228538 docker.go:319] overlay module found
	I1108 09:11:56.979161  228538 out.go:179] * Using the docker driver based on existing profile
	I1108 09:11:56.980340  228538 start.go:309] selected driver: docker
	I1108 09:11:56.980356  228538 start.go:930] validating driver "docker" against &{Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:11:56.980460  228538 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:11:56.980534  228538 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:11:57.037312  228538 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:79 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:11:57.02659194 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:11:57.038004  228538 cni.go:84] Creating CNI manager for ""
	I1108 09:11:57.038063  228538 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:11:57.038129  228538 start.go:353] cluster config:
	{Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:11:57.040212  228538 out.go:179] * Starting "pause-322482" primary control-plane node in "pause-322482" cluster
	I1108 09:11:57.041462  228538 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:11:57.042525  228538 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:11:57.043593  228538 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:11:57.043620  228538 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:11:57.043636  228538 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:11:57.043644  228538 cache.go:59] Caching tarball of preloaded images
	I1108 09:11:57.043794  228538 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:11:57.043818  228538 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:11:57.043995  228538 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/config.json ...
	I1108 09:11:57.065333  228538 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:11:57.065352  228538 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:11:57.065369  228538 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:11:57.065397  228538 start.go:360] acquireMachinesLock for pause-322482: {Name:mkbc3c6e2e0d0256e50f18ec85a056408e079d9e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:11:57.065462  228538 start.go:364] duration metric: took 43.477µs to acquireMachinesLock for "pause-322482"
	I1108 09:11:57.065484  228538 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:11:57.065493  228538 fix.go:54] fixHost starting: 
	I1108 09:11:57.065687  228538 cli_runner.go:164] Run: docker container inspect pause-322482 --format={{.State.Status}}
	I1108 09:11:57.084335  228538 fix.go:112] recreateIfNeeded on pause-322482: state=Running err=<nil>
	W1108 09:11:57.084363  228538 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:11:57.086304  228538 out.go:252] * Updating the running docker "pause-322482" container ...
	I1108 09:11:57.086335  228538 machine.go:94] provisionDockerMachine start ...
	I1108 09:11:57.086392  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.104859  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:57.105175  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:57.105194  228538 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:11:57.233458  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-322482
	
	I1108 09:11:57.233492  228538 ubuntu.go:182] provisioning hostname "pause-322482"
	I1108 09:11:57.233541  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.253115  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:57.253402  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:57.253423  228538 main.go:143] libmachine: About to run SSH command:
	sudo hostname pause-322482 && echo "pause-322482" | sudo tee /etc/hostname
	I1108 09:11:57.393684  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: pause-322482
	
	I1108 09:11:57.393761  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.412390  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:57.412600  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:57.412616  228538 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-322482' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-322482/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-322482' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:11:57.544212  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:11:57.544243  228538 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:11:57.544267  228538 ubuntu.go:190] setting up certificates
	I1108 09:11:57.544292  228538 provision.go:84] configureAuth start
	I1108 09:11:57.544358  228538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-322482
	I1108 09:11:57.562399  228538 provision.go:143] copyHostCerts
	I1108 09:11:57.562458  228538 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:11:57.562471  228538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:11:57.562542  228538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:11:57.562697  228538 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:11:57.562708  228538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:11:57.562740  228538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:11:57.562810  228538 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:11:57.562818  228538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:11:57.562841  228538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:11:57.562916  228538 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.pause-322482 san=[127.0.0.1 192.168.76.2 localhost minikube pause-322482]
	I1108 09:11:57.936476  228538 provision.go:177] copyRemoteCerts
	I1108 09:11:57.936529  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:11:57.936560  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:57.955915  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.050418  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:11:58.067944  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1108 09:11:58.085437  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:11:58.103233  228538 provision.go:87] duration metric: took 558.926995ms to configureAuth
	I1108 09:11:58.103265  228538 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:11:58.103483  228538 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:11:58.103567  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.122343  228538 main.go:143] libmachine: Using SSH client type: native
	I1108 09:11:58.122555  228538 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33049 <nil> <nil>}
	I1108 09:11:58.122574  228538 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:11:58.424113  228538 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:11:58.424140  228538 machine.go:97] duration metric: took 1.337797414s to provisionDockerMachine
	I1108 09:11:58.424154  228538 start.go:293] postStartSetup for "pause-322482" (driver="docker")
	I1108 09:11:58.424167  228538 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:11:58.424223  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:11:58.424262  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.444616  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.540186  228538 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:11:58.543820  228538 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:11:58.543846  228538 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:11:58.543856  228538 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:11:58.543915  228538 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:11:58.543983  228538 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:11:58.544071  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:11:58.551928  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:11:58.570473  228538 start.go:296] duration metric: took 146.288877ms for postStartSetup
	I1108 09:11:58.570556  228538 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:11:58.570602  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.588892  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.681611  228538 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:11:58.686572  228538 fix.go:56] duration metric: took 1.621069575s for fixHost
	I1108 09:11:58.686604  228538 start.go:83] releasing machines lock for "pause-322482", held for 1.621129312s
	I1108 09:11:58.686681  228538 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" pause-322482
	I1108 09:11:58.705884  228538 ssh_runner.go:195] Run: cat /version.json
	I1108 09:11:58.705927  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.705987  228538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:11:58.706056  228538 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" pause-322482
	I1108 09:11:58.725944  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.726403  228538 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33049 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/pause-322482/id_rsa Username:docker}
	I1108 09:11:58.818658  228538 ssh_runner.go:195] Run: systemctl --version
	I1108 09:11:58.874116  228538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:11:58.910675  228538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:11:58.915627  228538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:11:58.915700  228538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:11:58.923932  228538 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:11:58.923957  228538 start.go:496] detecting cgroup driver to use...
	I1108 09:11:58.923985  228538 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:11:58.924022  228538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:11:58.938847  228538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:11:58.952674  228538 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:11:58.952728  228538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:11:58.967899  228538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:11:58.980315  228538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:11:59.086389  228538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:11:59.196469  228538 docker.go:234] disabling docker service ...
	I1108 09:11:59.196542  228538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:11:59.211704  228538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:11:59.224646  228538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:11:59.341406  228538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:11:59.464081  228538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:11:59.476977  228538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:11:59.491411  228538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:11:59.491468  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.502447  228538 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:11:59.502513  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.512263  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.521087  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.530298  228538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:11:59.538743  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.548993  228538 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:11:59.563085  228538 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
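	The sed sequence above amounts to the following state in /etc/crio/crio.conf.d/02-crio.conf (a reconstruction from the commands themselves, not a capture from the node; the enclosing TOML section headers are omitted):
	
	  pause_image = "registry.k8s.io/pause:3.10.1"
	  cgroup_manager = "systemd"
	  conmon_cgroup = "pod"
	  default_sysctls = [
	    "net.ipv4.ip_unprivileged_port_start=0",
	  ]
	
	Note the ordering: existing conmon_cgroup and net.ipv4.ip_unprivileged_port_start lines are deleted before being re-added, so the edits stay idempotent when the same start path runs against an already-configured node.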
	I1108 09:11:59.576086  228538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:11:59.583556  228538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:11:59.591206  228538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:11:59.701644  228538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:11:59.861494  228538 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:11:59.861556  228538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:11:59.866203  228538 start.go:564] Will wait 60s for crictl version
	I1108 09:11:59.866268  228538 ssh_runner.go:195] Run: which crictl
	I1108 09:11:59.870151  228538 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:11:59.895324  228538 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:11:59.895417  228538 ssh_runner.go:195] Run: crio --version
	I1108 09:11:59.923560  228538 ssh_runner.go:195] Run: crio --version
	I1108 09:11:59.954199  228538 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:11:55.452264  225578 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:11:55.456573  225578 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:11:55.456588  225578 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:11:55.469659  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:11:55.675820  225578 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:11:55.675916  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:55.675963  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes auto-732849 minikube.k8s.io/updated_at=2025_11_08T09_11_55_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=auto-732849 minikube.k8s.io/primary=true
	I1108 09:11:55.758668  225578 ops.go:34] apiserver oom_adj: -16
	I1108 09:11:55.758675  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:56.258795  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:56.759487  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:57.258899  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:57.759466  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:58.259307  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:58.759429  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:59.258823  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:11:59.758791  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:12:00.259498  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:12:00.759383  225578 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:12:00.825466  225578 kubeadm.go:1114] duration metric: took 5.149606335s to wait for elevateKubeSystemPrivileges
	I1108 09:12:00.825505  225578 kubeadm.go:403] duration metric: took 14.795722819s to StartCluster
	I1108 09:12:00.825528  225578 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:00.825597  225578 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:12:00.827063  225578 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:00.827336  225578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:12:00.827375  225578 start.go:236] Will wait 15m0s for node &{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:12:00.827432  225578 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:12:00.827516  225578 addons.go:70] Setting storage-provisioner=true in profile "auto-732849"
	I1108 09:12:00.827533  225578 addons.go:239] Setting addon storage-provisioner=true in "auto-732849"
	I1108 09:12:00.827533  225578 addons.go:70] Setting default-storageclass=true in profile "auto-732849"
	I1108 09:12:00.827562  225578 host.go:66] Checking if "auto-732849" exists ...
	I1108 09:12:00.827613  225578 config.go:182] Loaded profile config "auto-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:00.827563  225578 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "auto-732849"
	I1108 09:12:00.828035  225578 cli_runner.go:164] Run: docker container inspect auto-732849 --format={{.State.Status}}
	I1108 09:12:00.828131  225578 cli_runner.go:164] Run: docker container inspect auto-732849 --format={{.State.Status}}
	I1108 09:12:00.829206  225578 out.go:179] * Verifying Kubernetes components...
	I1108 09:12:00.830519  225578 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:12:00.854029  225578 addons.go:239] Setting addon default-storageclass=true in "auto-732849"
	I1108 09:12:00.854067  225578 host.go:66] Checking if "auto-732849" exists ...
	I1108 09:12:00.854541  225578 cli_runner.go:164] Run: docker container inspect auto-732849 --format={{.State.Status}}
	I1108 09:12:00.858426  225578 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:11:59.955533  228538 cli_runner.go:164] Run: docker network inspect pause-322482 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:11:59.974511  228538 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:11:59.978865  228538 kubeadm.go:884] updating cluster {Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:11:59.979015  228538 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:11:59.979089  228538 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:12:00.010809  228538 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:12:00.010831  228538 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:12:00.010876  228538 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:12:00.035950  228538 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:12:00.035972  228538 cache_images.go:86] Images are preloaded, skipping loading
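The two `crictl images` probes above are how minikube decides whether the preloaded image tarball needs extracting: if CRI-O's store already holds every expected image, extraction is skipped. A hedged way to inspect the same store by hand (run inside the node, e.g. via `minikube ssh -p pause-322482`; assumes jq is available there):

    sudo crictl images --output json | jq '.images | length'   # count of images CRI-O already has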
	I1108 09:12:00.035979  228538 kubeadm.go:935] updating node { 192.168.76.2 8443 v1.34.1 crio true true} ...
	I1108 09:12:00.036085  228538 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-322482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
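The unit text above is a systemd drop-in; the empty `ExecStart=` line is the standard systemd idiom for clearing the packaged command before substituting a new one. A minimal sketch of installing it by hand, using the same paths this run ships it to in the scp step below (minikube writes it over SSH rather than with tee):

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf >/dev/null <<'EOF'
    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=pause-322482 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
    EOF
    sudo systemctl daemon-reload && sudo systemctl restart kubelet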
	I1108 09:12:00.036145  228538 ssh_runner.go:195] Run: crio config
	I1108 09:12:00.080910  228538 cni.go:84] Creating CNI manager for ""
	I1108 09:12:00.080931  228538 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:12:00.080953  228538 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:12:00.080975  228538 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-322482 NodeName:pause-322482 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:12:00.081100  228538 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-322482"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:12:00.081158  228538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:12:00.089618  228538 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:12:00.089684  228538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:12:00.097718  228538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (362 bytes)
	I1108 09:12:00.109915  228538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:12:00.122650  228538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2208 bytes)
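The rendered kubeadm config lands at /var/tmp/minikube/kubeadm.yaml.new (the 2208-byte scp above). A hedged sketch of sanity-checking such a file before it is used, with the node's own kubeadm binary (`kubeadm config validate` exists in v1.34):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new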
	I1108 09:12:00.135008  228538 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:12:00.138856  228538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:12:00.258794  228538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:12:00.272718  228538 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482 for IP: 192.168.76.2
	I1108 09:12:00.272745  228538 certs.go:195] generating shared ca certs ...
	I1108 09:12:00.272766  228538 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:00.272927  228538 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:12:00.273000  228538 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:12:00.273018  228538 certs.go:257] generating profile certs ...
	I1108 09:12:00.273138  228538 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key
	I1108 09:12:00.273226  228538 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/apiserver.key.9467e21f
	I1108 09:12:00.273351  228538 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/proxy-client.key
	I1108 09:12:00.273507  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:12:00.273549  228538 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:12:00.273574  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:12:00.273607  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:12:00.273638  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:12:00.273667  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:12:00.273723  228538 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:12:00.274593  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:12:00.294400  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:12:00.314468  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:12:00.334229  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:12:00.355920  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1108 09:12:00.375812  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:12:00.394208  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:12:00.412179  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1108 09:12:00.429490  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:12:00.449930  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:12:00.469434  228538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:12:00.487211  228538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:12:00.500455  228538 ssh_runner.go:195] Run: openssl version
	I1108 09:12:00.506659  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:12:00.514896  228538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:12:00.518674  228538 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:12:00.518725  228538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:12:00.556493  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:12:00.564876  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:12:00.573525  228538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:12:00.577402  228538 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:12:00.577457  228538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:12:00.622970  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:12:00.632402  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:12:00.642048  228538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:12:00.646205  228538 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:12:00.646263  228538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:12:00.684909  228538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
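The 8-hex-digit link names created above (b5213941.0, 51391683.0, 3ec20f2e.0) are OpenSSL subject-hash values: TLS libraries look CAs up in /etc/ssl/certs by `<hash>.0`. Reproducing one link by hand with the same files this run uses:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)  # prints e.g. b5213941
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"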
	I1108 09:12:00.694519  228538 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:12:00.699212  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:12:00.743700  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:12:00.781190  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:12:00.820052  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:12:00.874190  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:12:00.929460  228538 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
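`-checkend 86400` asks whether the certificate will still be valid 86400 seconds (24 h) from now; openssl exits 0 if so and 1 if not, which is what makes it usable as a one-line freshness probe:

    if sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "still valid for at least 24h"
    fi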
	I1108 09:12:00.988614  228538 kubeadm.go:401] StartCluster: {Name:pause-322482 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:pause-322482 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[
] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-
aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:12:00.988807  228538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:12:00.988882  228538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:12:01.029272  228538 cri.go:89] found id: "895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af"
	I1108 09:12:01.029353  228538 cri.go:89] found id: "91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5"
	I1108 09:12:01.029360  228538 cri.go:89] found id: "a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5"
	I1108 09:12:01.029366  228538 cri.go:89] found id: "1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c"
	I1108 09:12:01.029379  228538 cri.go:89] found id: "38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87"
	I1108 09:12:01.029384  228538 cri.go:89] found id: "3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb"
	I1108 09:12:01.029389  228538 cri.go:89] found id: "353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730"
	I1108 09:12:01.029394  228538 cri.go:89] found id: ""
	I1108 09:12:01.029449  228538 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:12:01.045874  228538 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:12:01Z" level=error msg="open /run/runc: no such file or directory"
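The `runc list` failure above is expected rather than a bug: per the CRI-O configuration dumped later in this report, the default runtime is crun (runtime_root "/run/crun"), so runc's state directory /run/runc was never created. A hedged cross-check against the runtime actually in use:

    sudo /usr/libexec/crio/crun list   # crun implements the same OCI `list` verb
    sudo crictl ps -a                  # runtime-agnostic view via the CRI socket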
	I1108 09:12:01.045974  228538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:12:01.057573  228538 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:12:01.057613  228538 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:12:01.057662  228538 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:12:01.067605  228538 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:12:01.068704  228538 kubeconfig.go:125] found "pause-322482" server: "https://192.168.76.2:8443"
	I1108 09:12:01.070505  228538 kapi.go:59] client config for pause-322482: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:12:01.071097  228538 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I1108 09:12:01.071117  228538 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I1108 09:12:01.071124  228538 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I1108 09:12:01.071130  228538 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
	I1108 09:12:01.071136  228538 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I1108 09:12:01.071546  228538 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:12:01.082540  228538 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:12:01.082568  228538 kubeadm.go:602] duration metric: took 24.949444ms to restartPrimaryControlPlane
	I1108 09:12:01.082576  228538 kubeadm.go:403] duration metric: took 93.973541ms to StartCluster
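restartPrimaryControlPlane hinges on the `diff -u` a few lines up: when the live /var/tmp/minikube/kubeadm.yaml matches the freshly rendered .new file, the entire kubeadm phase is skipped, which is why StartCluster completes in under 100ms here. The same check, sketched:

    if sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new >/dev/null; then
        echo "running cluster does not require reconfiguration"
    fi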
	I1108 09:12:01.082589  228538 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:01.082651  228538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:12:01.083919  228538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:12:01.084226  228538 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:12:01.084312  228538 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:12:01.084502  228538 config.go:182] Loaded profile config "pause-322482": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:12:01.087510  228538 out.go:179] * Enabled addons: 
	I1108 09:12:01.087650  228538 out.go:179] * Verifying Kubernetes components...
	I1108 09:12:00.859995  225578 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:12:00.860017  225578 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:12:00.860123  225578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-732849
	I1108 09:12:00.884862  225578 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:12:00.884890  225578 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:12:00.884955  225578 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-732849
	I1108 09:12:00.899351  225578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/auto-732849/id_rsa Username:docker}
	I1108 09:12:00.908596  225578 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33054 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/auto-732849/id_rsa Username:docker}
	I1108 09:12:00.922160  225578 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:12:00.991042  225578 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:12:01.017211  225578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:12:01.021424  225578 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:12:01.136552  225578 node_ready.go:35] waiting up to 15m0s for node "auto-732849" to be "Ready" ...
	I1108 09:12:01.139546  225578 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1108 09:12:01.356230  225578 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
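Addon enabling above is manifest delivery plus `kubectl apply` with the cluster's own binary and kubeconfig; the CoreDNS edit at 09:12:00.922 likewise pipes the live ConfigMap through sed to inject the host.minikube.internal record before replacing it. A minimal by-hand equivalent for one addon, run inside the node:

    sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
        /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml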
	I1108 09:12:01.089215  228538 addons.go:515] duration metric: took 4.903692ms for enable addons: enabled=[]
	I1108 09:12:01.089254  228538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:12:01.232193  228538 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:12:01.245760  228538 node_ready.go:35] waiting up to 6m0s for node "pause-322482" to be "Ready" ...
	I1108 09:12:01.254218  228538 node_ready.go:49] node "pause-322482" is "Ready"
	I1108 09:12:01.254250  228538 node_ready.go:38] duration metric: took 8.460477ms for node "pause-322482" to be "Ready" ...
	I1108 09:12:01.254265  228538 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:12:01.254344  228538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:12:01.266443  228538 api_server.go:72] duration metric: took 182.179786ms to wait for apiserver process to appear ...
	I1108 09:12:01.266470  228538 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:12:01.266489  228538 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8443/healthz ...
	I1108 09:12:01.270569  228538 api_server.go:279] https://192.168.76.2:8443/healthz returned 200:
	ok
	I1108 09:12:01.271524  228538 api_server.go:141] control plane version: v1.34.1
	I1108 09:12:01.271546  228538 api_server.go:131] duration metric: took 5.070437ms to wait for apiserver health ...
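The healthz probe is a plain HTTPS GET that returns the body "ok" on success. An equivalent from the host shell, assuming the profile's client certificate is accepted by the apiserver (minikube's Go client is configured with exactly these files, per the rest.Config dump above):

    curl --cacert /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt \
         --cert   /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.crt \
         --key    /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key \
         https://192.168.76.2:8443/healthz    # prints: ok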
	I1108 09:12:01.271553  228538 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:12:01.274966  228538 system_pods.go:59] 7 kube-system pods found
	I1108 09:12:01.274995  228538 system_pods.go:61] "coredns-66bc5c9577-8h2lz" [551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e] Running
	I1108 09:12:01.275000  228538 system_pods.go:61] "etcd-pause-322482" [047c2f99-089f-4c34-b846-5d78c12d0655] Running
	I1108 09:12:01.275005  228538 system_pods.go:61] "kindnet-tst5j" [d014725c-c216-4b28-8694-a753f2d87b87] Running
	I1108 09:12:01.275011  228538 system_pods.go:61] "kube-apiserver-pause-322482" [0b42403c-e150-4405-923c-7a7c6cba26d9] Running
	I1108 09:12:01.275017  228538 system_pods.go:61] "kube-controller-manager-pause-322482" [bbd64ee3-f85a-486a-b3a7-1d66cdc9a947] Running
	I1108 09:12:01.275023  228538 system_pods.go:61] "kube-proxy-tbffl" [3e9e8a05-5439-48fb-9217-e23f242c9789] Running
	I1108 09:12:01.275031  228538 system_pods.go:61] "kube-scheduler-pause-322482" [ac7ac59d-b3ba-4179-ba5b-e8d04e54d1c9] Running
	I1108 09:12:01.275039  228538 system_pods.go:74] duration metric: took 3.479026ms to wait for pod list to return data ...
	I1108 09:12:01.275052  228538 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:12:01.276984  228538 default_sa.go:45] found service account: "default"
	I1108 09:12:01.277000  228538 default_sa.go:55] duration metric: took 1.943285ms for default service account to be created ...
	I1108 09:12:01.277008  228538 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:12:01.279701  228538 system_pods.go:86] 7 kube-system pods found
	I1108 09:12:01.279723  228538 system_pods.go:89] "coredns-66bc5c9577-8h2lz" [551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e] Running
	I1108 09:12:01.279728  228538 system_pods.go:89] "etcd-pause-322482" [047c2f99-089f-4c34-b846-5d78c12d0655] Running
	I1108 09:12:01.279732  228538 system_pods.go:89] "kindnet-tst5j" [d014725c-c216-4b28-8694-a753f2d87b87] Running
	I1108 09:12:01.279735  228538 system_pods.go:89] "kube-apiserver-pause-322482" [0b42403c-e150-4405-923c-7a7c6cba26d9] Running
	I1108 09:12:01.279739  228538 system_pods.go:89] "kube-controller-manager-pause-322482" [bbd64ee3-f85a-486a-b3a7-1d66cdc9a947] Running
	I1108 09:12:01.279743  228538 system_pods.go:89] "kube-proxy-tbffl" [3e9e8a05-5439-48fb-9217-e23f242c9789] Running
	I1108 09:12:01.279748  228538 system_pods.go:89] "kube-scheduler-pause-322482" [ac7ac59d-b3ba-4179-ba5b-e8d04e54d1c9] Running
	I1108 09:12:01.279755  228538 system_pods.go:126] duration metric: took 2.742856ms to wait for k8s-apps to be running ...
	I1108 09:12:01.279767  228538 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:12:01.279810  228538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:12:01.294135  228538 system_svc.go:56] duration metric: took 14.35867ms WaitForService to wait for kubelet
	I1108 09:12:01.294166  228538 kubeadm.go:587] duration metric: took 209.906788ms to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:12:01.294187  228538 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:12:01.297359  228538 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:12:01.297384  228538 node_conditions.go:123] node cpu capacity is 8
	I1108 09:12:01.297395  228538 node_conditions.go:105] duration metric: took 3.202379ms to run NodePressure ...
	I1108 09:12:01.297405  228538 start.go:242] waiting for startup goroutines ...
	I1108 09:12:01.297414  228538 start.go:247] waiting for cluster config update ...
	I1108 09:12:01.297424  228538 start.go:256] writing updated cluster config ...
	I1108 09:12:01.297752  228538 ssh_runner.go:195] Run: rm -f paused
	I1108 09:12:01.301899  228538 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:12:01.302924  228538 kapi.go:59] client config for pause-322482: &rest.Config{Host:"https://192.168.76.2:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.crt", KeyFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/profiles/pause-322482/client.key", CAFile:"/home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(
nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x28254c0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), WarningHandlerWithContext:rest.WarningHandlerWithContext(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I1108 09:12:01.306172  228538 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-8h2lz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.310953  228538 pod_ready.go:94] pod "coredns-66bc5c9577-8h2lz" is "Ready"
	I1108 09:12:01.310982  228538 pod_ready.go:86] duration metric: took 4.787483ms for pod "coredns-66bc5c9577-8h2lz" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.313167  228538 pod_ready.go:83] waiting for pod "etcd-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.318192  228538 pod_ready.go:94] pod "etcd-pause-322482" is "Ready"
	I1108 09:12:01.318225  228538 pod_ready.go:86] duration metric: took 5.038568ms for pod "etcd-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.320913  228538 pod_ready.go:83] waiting for pod "kube-apiserver-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.326839  228538 pod_ready.go:94] pod "kube-apiserver-pause-322482" is "Ready"
	I1108 09:12:01.326864  228538 pod_ready.go:86] duration metric: took 5.927135ms for pod "kube-apiserver-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.329399  228538 pod_ready.go:83] waiting for pod "kube-controller-manager-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:01.706742  228538 pod_ready.go:94] pod "kube-controller-manager-pause-322482" is "Ready"
	I1108 09:12:01.706780  228538 pod_ready.go:86] duration metric: took 377.350138ms for pod "kube-controller-manager-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:11:58.874077  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:11:58.874126  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:01.906797  228538 pod_ready.go:83] waiting for pod "kube-proxy-tbffl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.306596  228538 pod_ready.go:94] pod "kube-proxy-tbffl" is "Ready"
	I1108 09:12:02.306622  228538 pod_ready.go:86] duration metric: took 399.800524ms for pod "kube-proxy-tbffl" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.506468  228538 pod_ready.go:83] waiting for pod "kube-scheduler-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.906791  228538 pod_ready.go:94] pod "kube-scheduler-pause-322482" is "Ready"
	I1108 09:12:02.906821  228538 pod_ready.go:86] duration metric: took 400.323338ms for pod "kube-scheduler-pause-322482" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:12:02.906835  228538 pod_ready.go:40] duration metric: took 1.604902811s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
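The "extra waiting" loop above checks each control-plane label selector in turn until every matching pod reports Ready. Expressed with kubectl rather than minikube's internal client, a sketch (the "or be gone" escape hatch is not captured here):

    for sel in k8s-app=kube-dns component=etcd component=kube-apiserver \
               component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
        kubectl --context pause-322482 -n kube-system wait pod -l "$sel" \
            --for=condition=Ready --timeout=4m
    done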
	I1108 09:12:02.947942  228538 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:12:02.949669  228538 out.go:179] * Done! kubectl is now configured to use "pause-322482" cluster and "default" namespace by default
	I1108 09:12:01.357181  225578 addons.go:515] duration metric: took 529.754115ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:12:01.643261  225578 kapi.go:214] "coredns" deployment in "kube-system" namespace and "auto-732849" context rescaled to 1 replicas
	W1108 09:12:03.139890  225578 node_ready.go:57] node "auto-732849" has "Ready":"False" status (will retry)
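The rescale logged at 09:12:01.643 is minikube trimming CoreDNS to a single replica on a single-node cluster; the imperative equivalent (a sketch, minikube does this through the API rather than via kubectl):

    kubectl --context auto-732849 -n kube-system scale deployment coredns --replicas=1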
	I1108 09:12:03.875879  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I1108 09:12:03.875918  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:04.194029  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": read tcp 192.168.85.1:38376->192.168.85.2:8443: read: connection reset by peer
	I1108 09:12:04.369461  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:04.369922  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:12:04.869484  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:04.869827  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:12:05.369373  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:05.369757  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:12:05.869252  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:05.869720  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:12:06.369357  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:06.369771  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	I1108 09:12:06.869422  218337 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:12:06.869861  218337 api_server.go:269] stopped: https://192.168.85.2:8443/healthz: Get "https://192.168.85.2:8443/healthz": dial tcp 192.168.85.2:8443: connect: connection refused
	
	
	==> CRI-O <==
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.796340702Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.797310034Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.797331429Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.797344396Z" level=info msg="Using conmon executable: /usr/libexec/crio/conmon"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.798048934Z" level=info msg="Conmon does support the --sync option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.798064236Z" level=info msg="Conmon does support the --log-global-size-max option"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.802092916Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.802115051Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.802742378Z" level=info msg="Current CRI-O configuration:\n[crio]\n  root = \"/var/lib/containers/storage\"\n  runroot = \"/run/containers/storage\"\n  imagestore = \"\"\n  storage_driver = \"overlay\"\n  log_dir = \"/var/log/crio/pods\"\n  version_file = \"/var/run/crio/version\"\n  version_file_persist = \"\"\n  clean_shutdown_file = \"/var/lib/crio/clean.shutdown\"\n  internal_wipe = true\n  internal_repair = true\n  [crio.api]\n    grpc_max_send_msg_size = 83886080\n    grpc_max_recv_msg_size = 83886080\n    listen = \"/var/run/crio/crio.sock\"\n    stream_address = \"127.0.0.1\"\n    stream_port = \"0\"\n    stream_enable_tls = false\n    stream_tls_cert = \"\"\n    stream_tls_key = \"\"\n    stream_tls_ca = \"\"\n    stream_idle_timeout = \"\"\n  [crio.runtime]\n    no_pivot = false\n    selinux = false\n    log_to_journald = false\n    drop_infra_ctr = true\n    read_only = false\n    hooks_dir = [\"/usr/share/containers/oci/hoo
ks.d\"]\n    default_capabilities = [\"CHOWN\", \"DAC_OVERRIDE\", \"FSETID\", \"FOWNER\", \"SETGID\", \"SETUID\", \"SETPCAP\", \"NET_BIND_SERVICE\", \"KILL\"]\n    add_inheritable_capabilities = false\n    default_sysctls = [\"net.ipv4.ip_unprivileged_port_start=0\"]\n    allowed_devices = [\"/dev/fuse\", \"/dev/net/tun\"]\n    cdi_spec_dirs = [\"/etc/cdi\", \"/var/run/cdi\"]\n    device_ownership_from_security_context = false\n    default_runtime = \"crun\"\n    decryption_keys_path = \"/etc/crio/keys/\"\n    conmon = \"\"\n    conmon_cgroup = \"pod\"\n    seccomp_profile = \"\"\n    privileged_seccomp_profile = \"\"\n    apparmor_profile = \"crio-default\"\n    blockio_config_file = \"\"\n    blockio_reload = false\n    irqbalance_config_file = \"/etc/sysconfig/irqbalance\"\n    rdt_config_file = \"\"\n    cgroup_manager = \"systemd\"\n    default_mounts_file = \"\"\n    container_exits_dir = \"/var/run/crio/exits\"\n    container_attach_socket_dir = \"/var/run/crio\"\n    bind_mount_prefix = \"\"\n    uid_
mappings = \"\"\n    minimum_mappable_uid = -1\n    gid_mappings = \"\"\n    minimum_mappable_gid = -1\n    log_level = \"info\"\n    log_filter = \"\"\n    namespaces_dir = \"/var/run\"\n    pinns_path = \"/usr/bin/pinns\"\n    enable_criu_support = false\n    pids_limit = -1\n    log_size_max = -1\n    ctr_stop_timeout = 30\n    separate_pull_cgroup = \"\"\n    infra_ctr_cpuset = \"\"\n    shared_cpuset = \"\"\n    enable_pod_events = false\n    irqbalance_config_restore_file = \"/etc/sysconfig/orig_irq_banned_cpus\"\n    hostnetwork_disable_selinux = true\n    disable_hostport_mapping = false\n    timezone = \"\"\n    [crio.runtime.runtimes]\n      [crio.runtime.runtimes.crun]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/crun\"\n        runtime_type = \"\"\n        runtime_root = \"/run/crun\"\n        allowed_annotations = [\"io.containers.trace-syscall\"]\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory
= \"12MiB\"\n        no_sync_log = false\n      [crio.runtime.runtimes.runc]\n        runtime_config_path = \"\"\n        runtime_path = \"/usr/libexec/crio/runc\"\n        runtime_type = \"\"\n        runtime_root = \"/run/runc\"\n        monitor_path = \"/usr/libexec/crio/conmon\"\n        monitor_cgroup = \"pod\"\n        container_min_memory = \"12MiB\"\n        no_sync_log = false\n  [crio.image]\n    default_transport = \"docker://\"\n    global_auth_file = \"\"\n    pause_image = \"registry.k8s.io/pause:3.10.1\"\n    pause_image_auth_file = \"\"\n    pause_command = \"/pause\"\n    signature_policy = \"/etc/crio/policy.json\"\n    signature_policy_dir = \"/etc/crio/policies\"\n    image_volumes = \"mkdir\"\n    big_files_temporary_dir = \"\"\n    auto_reload_registries = false\n    pull_progress_timeout = \"0s\"\n    oci_artifact_mount_support = true\n    short_name_mode = \"enforcing\"\n  [crio.network]\n    cni_default_network = \"\"\n    network_dir = \"/etc/cni/net.d/\"\n    plugin_dirs = [\"/opt/
cni/bin/\"]\n  [crio.metrics]\n    enable_metrics = false\n    metrics_collectors = [\"image_pulls_layer_size\", \"containers_events_dropped_total\", \"containers_oom_total\", \"processes_defunct\", \"operations_total\", \"operations_latency_seconds\", \"operations_latency_seconds_total\", \"operations_errors_total\", \"image_pulls_bytes_total\", \"image_pulls_skipped_bytes_total\", \"image_pulls_failure_total\", \"image_pulls_success_total\", \"image_layer_reuse_total\", \"containers_oom_count_total\", \"containers_seccomp_notifier_count_total\", \"resources_stalled_at_stage\", \"containers_stopped_monitor_count\"]\n    metrics_host = \"127.0.0.1\"\n    metrics_port = 9090\n    metrics_socket = \"\"\n    metrics_cert = \"\"\n    metrics_key = \"\"\n  [crio.tracing]\n    enable_tracing = false\n    tracing_endpoint = \"127.0.0.1:4317\"\n    tracing_sampling_rate_per_million = 0\n  [crio.stats]\n    stats_collection_period = 0\n    collection_period = 0\n  [crio.nri]\n    enable_nri = true\n    nri_listen = \"
/var/run/nri/nri.sock\"\n    nri_plugin_dir = \"/opt/nri/plugins\"\n    nri_plugin_config_dir = \"/etc/nri/conf.d\"\n    nri_plugin_registration_timeout = \"5s\"\n    nri_plugin_request_timeout = \"2s\"\n    nri_disable_connections = false\n    [crio.nri.default_validator]\n      nri_enable_default_validator = false\n      nri_validator_reject_oci_hook_adjustment = false\n      nri_validator_reject_runtime_default_seccomp_adjustment = false\n      nri_validator_reject_unconfined_seccomp_adjustment = false\n      nri_validator_reject_custom_seccomp_adjustment = false\n      nri_validator_reject_namespace_adjustment = false\n      nri_validator_tolerate_missing_plugins_annotation = \"\"\n"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.803227046Z" level=info msg="Attempting to restore irqbalance config from /etc/sysconfig/orig_irq_banned_cpus"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.803302002Z" level=info msg="Restore irqbalance config: failed to get current CPU ban list, ignoring"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.809676427Z" level=info msg="No kernel support for IPv6: could not find nftables binary: exec: \"nft\": executable file not found in $PATH"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.85379259Z" level=info msg="Got pod network &{Name:coredns-66bc5c9577-8h2lz Namespace:kube-system ID:1411c273490ebf654cf4f5ddc0f1f416f77ee794e4a297aa12605712b4fe0b4d UID:551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e NetNS:/var/run/netns/fc51d3d5-fb6d-474d-a983-e6c444f88a6c Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc000520290}] Aliases:map[]}"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854032942Z" level=info msg="Checking pod kube-system_coredns-66bc5c9577-8h2lz for CNI network kindnet (type=ptp)"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854609324Z" level=info msg="Registered SIGHUP reload watcher"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854650701Z" level=info msg="Starting seccomp notifier watcher"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854703515Z" level=info msg="Create NRI interface"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854821233Z" level=info msg="built-in NRI default validator is disabled"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854837303Z" level=info msg="runtime interface created"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854852078Z" level=info msg="Registered domain \"k8s.io\" with NRI"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854860991Z" level=info msg="runtime interface starting up..."
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854868786Z" level=info msg="starting plugins..."
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.854885234Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
	Nov 08 09:11:59 pause-322482 crio[2157]: time="2025-11-08T09:11:59.855393616Z" level=info msg="No systemd watchdog enabled"
	Nov 08 09:11:59 pause-322482 systemd[1]: Started crio.service - Container Runtime Interface for OCI (CRI-O).
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                    NAMESPACE
	895c5f7f119d6       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969   13 seconds ago      Running             coredns                   0                   1411c273490eb       coredns-66bc5c9577-8h2lz               kube-system
	91979a7219cae       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   24 seconds ago      Running             kube-proxy                0                   0749049a1ab72       kube-proxy-tbffl                       kube-system
	a30c2c1d7897e       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   24 seconds ago      Running             kindnet-cni               0                   dfed9a62e530d       kindnet-tst5j                          kube-system
	1dc91bd79da78       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   34 seconds ago      Running             kube-apiserver            0                   e202d63cc2de1       kube-apiserver-pause-322482            kube-system
	38076341860d8       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   34 seconds ago      Running             kube-controller-manager   0                   9435ca79159a0       kube-controller-manager-pause-322482   kube-system
	3891b7663519a       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   34 seconds ago      Running             etcd                      0                   07cdf4a1cc6ff       etcd-pause-322482                      kube-system
	353c9ecf18e5b       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   34 seconds ago      Running             kube-scheduler            0                   0d2cd33f6bfdf       kube-scheduler-pause-322482            kube-system
	
	
	==> coredns [895c5f7f119d69cf9d478fcd81a9feb6fccdf35b795a911079f92c65eeeae4af] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:56180 - 22928 "HINFO IN 5198234154057578563.4147425357032121014. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.419215411s
	
	
	==> describe nodes <==
	Name:               pause-322482
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-322482
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=pause-322482
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_11_39_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:11:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-322482
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:11:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:11:54 +0000   Sat, 08 Nov 2025 09:11:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    pause-322482
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a3f8045d-1e00-45e0-945a-6624eab8a9bc
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-66bc5c9577-8h2lz                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-pause-322482                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         31s
	  kube-system                 kindnet-tst5j                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      25s
	  kube-system                 kube-apiserver-pause-322482             250m (3%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-controller-manager-pause-322482    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-tbffl                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-scheduler-pause-322482             100m (1%)     0 (0%)      0 (0%)           0 (0%)         31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 24s   kube-proxy       
	  Normal  Starting                 31s   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s   kubelet          Node pause-322482 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s   kubelet          Node pause-322482 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s   kubelet          Node pause-322482 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s   node-controller  Node pause-322482 event: Registered Node pause-322482 in Controller
	  Normal  NodeReady                14s   kubelet          Node pause-322482 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.084884] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +0.024045] systemd[1]: /lib/systemd/system/snapd.service:23: Unknown key name 'RestartMode' in section 'Service', ignoring.
	[  +7.205659] kauditd_printk_skb: 47 callbacks suppressed
	[Nov 8 08:31] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.054730] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023856] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000022] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023879] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023851] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +1.023894] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +2.047820] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +4.031573] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000026] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[  +8.127109] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[Nov 8 08:32] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000006] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	[ +32.252508] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000023] ll header: 00000000: 56 16 19 86 40 93 2a 09 eb bd fd 06 08 00
	
	
	==> etcd [3891b7663519ad844dcf865442d12a34a029ed99acb99909f05b92c9474b7adb] <==
	{"level":"info","ts":"2025-11-08T09:11:39.234740Z","caller":"traceutil/trace.go:172","msg":"trace[356604010] transaction","detail":"{read_only:false; number_of_response:0; response_revision:264; }","duration":"344.729876ms","start":"2025-11-08T09:11:38.890004Z","end":"2025-11-08T09:11:39.234734Z","steps":["trace[356604010] 'process raft request'  (duration: 344.556307ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.234780Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:38.889992Z","time spent":"344.767941ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/etcd-pause-322482\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/etcd-pause-322482\" value_size:4321 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:11:39.234583Z","caller":"traceutil/trace.go:172","msg":"trace[1252827889] transaction","detail":"{read_only:false; number_of_response:0; response_revision:264; }","duration":"344.602955ms","start":"2025-11-08T09:11:38.889970Z","end":"2025-11-08T09:11:39.234573Z","steps":["trace[1252827889] 'process raft request'  (duration: 344.55678ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.235116Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:38.889955Z","time spent":"345.111648ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":28,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-apiserver-pause-322482\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-apiserver-pause-322482\" value_size:6193 >> failure:<>"}
	{"level":"warn","ts":"2025-11-08T09:11:39.234869Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:38.908616Z","time spent":"326.048876ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4960,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-scheduler-pause-322482\" mod_revision:228 > success:<request_put:<key:\"/registry/pods/kube-system/kube-scheduler-pause-322482\" value_size:4898 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-scheduler-pause-322482\" > >"}
	{"level":"warn","ts":"2025-11-08T09:11:39.621699Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"256.658876ms","expected-duration":"100ms","prefix":"","request":"header:<ID:15638356504945783439 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" value_size:130 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:11:39.621940Z","caller":"traceutil/trace.go:172","msg":"trace[1715006353] transaction","detail":"{read_only:false; response_revision:266; number_of_response:1; }","duration":"382.483034ms","start":"2025-11-08T09:11:39.239435Z","end":"2025-11-08T09:11:39.621919Z","steps":["trace[1715006353] 'process raft request'  (duration: 125.54116ms)","trace[1715006353] 'compare'  (duration: 256.539984ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:11:39.622079Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:39.239421Z","time spent":"382.610755ms","remote":"127.0.0.1:55668","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":201,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/resource-claim-controller\" value_size:130 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:11:39.622221Z","caller":"traceutil/trace.go:172","msg":"trace[1300107591] transaction","detail":"{read_only:false; response_revision:268; number_of_response:1; }","duration":"379.118572ms","start":"2025-11-08T09:11:39.243091Z","end":"2025-11-08T09:11:39.622209Z","steps":["trace[1300107591] 'process raft request'  (duration: 379.034154ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.622335Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:39.243076Z","time spent":"379.188918ms","remote":"127.0.0.1:55620","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":7268,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-controller-manager-pause-322482\" mod_revision:242 > success:<request_put:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-322482\" value_size:7197 >> failure:<request_range:<key:\"/registry/pods/kube-system/kube-controller-manager-pause-322482\" > >"}
	{"level":"info","ts":"2025-11-08T09:11:39.622339Z","caller":"traceutil/trace.go:172","msg":"trace[105450183] transaction","detail":"{read_only:false; response_revision:267; number_of_response:1; }","duration":"382.604224ms","start":"2025-11-08T09:11:39.239721Z","end":"2025-11-08T09:11:39.622325Z","steps":["trace[105450183] 'process raft request'  (duration: 382.063727ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:39.622408Z","caller":"v3rpc/interceptor.go:202","msg":"request stats","start time":"2025-11-08T09:11:39.239709Z","time spent":"382.65986ms","remote":"127.0.0.1:55976","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":844,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/clusterrolebindings/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/clusterrolebindings/kindnet\" value_size:799 >> failure:<>"}
	{"level":"info","ts":"2025-11-08T09:11:39.854580Z","caller":"traceutil/trace.go:172","msg":"trace[1659113502] transaction","detail":"{read_only:false; response_revision:272; number_of_response:1; }","duration":"132.902079ms","start":"2025-11-08T09:11:39.721659Z","end":"2025-11-08T09:11:39.854561Z","steps":["trace[1659113502] 'process raft request'  (duration: 132.390821ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:39.937425Z","caller":"traceutil/trace.go:172","msg":"trace[846374865] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"214.966676ms","start":"2025-11-08T09:11:39.722438Z","end":"2025-11-08T09:11:39.937404Z","steps":["trace[846374865] 'process raft request'  (duration: 214.883681ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:39.937449Z","caller":"traceutil/trace.go:172","msg":"trace[1752848594] transaction","detail":"{read_only:false; response_revision:273; number_of_response:1; }","duration":"215.187956ms","start":"2025-11-08T09:11:39.722242Z","end":"2025-11-08T09:11:39.937430Z","steps":["trace[1752848594] 'process raft request'  (duration: 214.959204ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:11:40.109611Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"100.777094ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/pause-322482\" limit:1 ","response":"range_response_count:1 size:4811"}
	{"level":"info","ts":"2025-11-08T09:11:40.109740Z","caller":"traceutil/trace.go:172","msg":"trace[1194401422] range","detail":"{range_begin:/registry/minions/pause-322482; range_end:; response_count:1; response_revision:274; }","duration":"100.911888ms","start":"2025-11-08T09:11:40.008809Z","end":"2025-11-08T09:11:40.109720Z","steps":["trace[1194401422] 'agreement among raft nodes before linearized reading'  (duration: 61.207936ms)","trace[1194401422] 'range keys from in-memory index tree'  (duration: 39.46836ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:11:40.109870Z","caller":"traceutil/trace.go:172","msg":"trace[1161015891] transaction","detail":"{read_only:false; response_revision:277; number_of_response:1; }","duration":"102.743464ms","start":"2025-11-08T09:11:40.007113Z","end":"2025-11-08T09:11:40.109856Z","steps":["trace[1161015891] 'process raft request'  (duration: 102.70609ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:40.110052Z","caller":"traceutil/trace.go:172","msg":"trace[1935287880] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"164.12605ms","start":"2025-11-08T09:11:39.945913Z","end":"2025-11-08T09:11:40.110039Z","steps":["trace[1935287880] 'process raft request'  (duration: 163.858778ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:40.110097Z","caller":"traceutil/trace.go:172","msg":"trace[650428140] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"167.02331ms","start":"2025-11-08T09:11:39.943057Z","end":"2025-11-08T09:11:40.110080Z","steps":["trace[650428140] 'process raft request'  (duration: 127.000767ms)","trace[650428140] 'compare'  (duration: 39.592815ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:11:40.301368Z","caller":"traceutil/trace.go:172","msg":"trace[1947465739] linearizableReadLoop","detail":"{readStateIndex:290; appliedIndex:290; }","duration":"121.014824ms","start":"2025-11-08T09:11:40.180332Z","end":"2025-11-08T09:11:40.301347Z","steps":["trace[1947465739] 'read index received'  (duration: 121.006361ms)","trace[1947465739] 'applied index is now lower than readState.Index'  (duration: 7.181µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:11:40.398055Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"217.697344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/default/default\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:11:40.398139Z","caller":"traceutil/trace.go:172","msg":"trace[775499133] range","detail":"{range_begin:/registry/serviceaccounts/default/default; range_end:; response_count:0; response_revision:278; }","duration":"217.800926ms","start":"2025-11-08T09:11:40.180321Z","end":"2025-11-08T09:11:40.398122Z","steps":["trace[775499133] 'agreement among raft nodes before linearized reading'  (duration: 121.096688ms)","trace[775499133] 'range keys from in-memory index tree'  (duration: 96.564012ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:11:40.398195Z","caller":"traceutil/trace.go:172","msg":"trace[274750649] transaction","detail":"{read_only:false; response_revision:280; number_of_response:1; }","duration":"235.35411ms","start":"2025-11-08T09:11:40.162829Z","end":"2025-11-08T09:11:40.398183Z","steps":["trace[274750649] 'process raft request'  (duration: 235.314458ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:11:40.398259Z","caller":"traceutil/trace.go:172","msg":"trace[38279491] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"277.364846ms","start":"2025-11-08T09:11:40.120876Z","end":"2025-11-08T09:11:40.398241Z","steps":["trace[38279491] 'process raft request'  (duration: 180.609095ms)","trace[38279491] 'compare'  (duration: 96.537569ms)"],"step_count":2}
	
	
	==> kernel <==
	 09:12:08 up 54 min,  0 user,  load average: 3.58, 2.95, 1.79
	Linux pause-322482 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [a30c2c1d7897ec4c24c496d7c9c2e0267d61c8afab639e0e8f543dc5346116a5] <==
	I1108 09:11:43.938320       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:11:43.940385       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:11:43.940538       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:11:43.940554       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:11:43.940567       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:11:44Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:11:44.138036       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:11:44.138065       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:11:44.138077       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:11:44.138226       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:11:44.534522       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:11:44.534567       1 metrics.go:72] Registering metrics
	I1108 09:11:44.534644       1 controller.go:711] "Syncing nftables rules"
	I1108 09:11:54.138196       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:11:54.138254       1 main.go:301] handling current node
	I1108 09:12:04.138145       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:12:04.138176       1 main.go:301] handling current node
	
	
	==> kube-apiserver [1dc91bd79da78aae91e942fb4af7f5b0c118b94288c3ca31ca80df12cac3b27c] <==
	E1108 09:11:35.283183       1 controller.go:148] "Unhandled Error" err="while syncing ConfigMap \"kube-system/kube-apiserver-legacy-service-account-token-tracking\", err: namespaces \"kube-system\" not found" logger="UnhandledError"
	E1108 09:11:35.306092       1 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-system\" not found" interval="200ms"
	I1108 09:11:35.330233       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:11:35.335873       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:35.335883       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:11:35.342072       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:35.343162       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:11:35.510613       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:11:36.133757       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:11:36.138118       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:11:36.138141       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:11:36.632878       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:11:36.669536       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:11:36.737887       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:11:36.745526       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:11:36.746617       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:11:36.750911       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:11:37.168844       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:11:37.999367       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:11:38.009738       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:11:38.016905       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:11:42.772397       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:42.775955       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:11:42.920890       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:11:43.272088       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [38076341860d86f5f36f9769243b7b1ca65f8dc159de5f30ead4db71abf60f87] <==
	I1108 09:11:42.138651       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:11:42.144905       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:11:42.144990       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:11:42.145049       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:11:42.145061       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:11:42.145068       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:11:42.151157       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:11:42.152345       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:11:42.153594       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="pause-322482" podCIDRs=["10.244.0.0/24"]
	I1108 09:11:42.155723       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:11:42.156919       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:11:42.167503       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:11:42.167901       1 shared_informer.go:356] "Caches are synced" controller="expand"
	I1108 09:11:42.168013       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:11:42.168028       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:11:42.169215       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:11:42.169234       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:11:42.169241       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:11:42.169304       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:11:42.170328       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:11:42.171549       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:11:42.171650       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:11:42.172938       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:11:42.188820       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:11:57.118250       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [91979a7219cae7fcc2f91539748221c1c0f903f566b4698a86093ee4145dddf5] <==
	I1108 09:11:43.691255       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:11:43.759005       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:11:43.859432       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:11:43.859462       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:11:43.859566       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:11:43.880206       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:11:43.880324       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:11:43.886361       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:11:43.886808       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:11:43.886843       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:11:43.888572       1 config.go:309] "Starting node config controller"
	I1108 09:11:43.888589       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:11:43.888597       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:11:43.888750       1 config.go:200] "Starting service config controller"
	I1108 09:11:43.888769       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:11:43.888772       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:11:43.888779       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:11:43.888816       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:11:43.888822       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:11:43.988837       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:11:43.988922       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:11:43.988971       1 shared_informer.go:356] "Caches are synced" controller="service config"
	
	
	==> kube-scheduler [353c9ecf18e5b89d713b70c635da41f336d2c82ed196f8c8f928aa08173cd730] <==
	E1108 09:11:35.193738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:11:35.193751       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:11:35.193777       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:11:35.193816       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:11:35.193823       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:11:35.193895       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:11:35.193910       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:11:35.194121       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:11:35.194452       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:11:36.004055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:11:36.074488       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:11:36.089080       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:11:36.097561       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:11:36.127029       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:11:36.195846       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:11:36.250242       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:11:36.263601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:11:36.265501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:11:36.276755       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:11:36.346357       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:11:36.370995       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:11:36.414619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:11:36.449711       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:11:36.465774       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	I1108 09:11:39.287614       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:11:39 pause-322482 kubelet[1327]: E1108 09:11:39.237682    1327 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-322482\" already exists" pod="kube-system/kube-scheduler-pause-322482"
	Nov 08 09:11:39 pause-322482 kubelet[1327]: I1108 09:11:39.623599    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-pause-322482" podStartSLOduration=2.623575257 podStartE2EDuration="2.623575257s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.62351917 +0000 UTC m=+1.845903161" watchObservedRunningTime="2025-11-08 09:11:39.623575257 +0000 UTC m=+1.845959254"
	Nov 08 09:11:39 pause-322482 kubelet[1327]: I1108 09:11:39.623770    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-pause-322482" podStartSLOduration=2.623756124 podStartE2EDuration="2.623756124s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.237777106 +0000 UTC m=+1.460161102" watchObservedRunningTime="2025-11-08 09:11:39.623756124 +0000 UTC m=+1.846140122"
	Nov 08 09:11:39 pause-322482 kubelet[1327]: I1108 09:11:39.716342    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-pause-322482" podStartSLOduration=2.716318963 podStartE2EDuration="2.716318963s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.716270544 +0000 UTC m=+1.938654540" watchObservedRunningTime="2025-11-08 09:11:39.716318963 +0000 UTC m=+1.938702963"
	Nov 08 09:11:40 pause-322482 kubelet[1327]: I1108 09:11:40.112254    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-pause-322482" podStartSLOduration=3.112231013 podStartE2EDuration="3.112231013s" podCreationTimestamp="2025-11-08 09:11:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:39.93932765 +0000 UTC m=+2.161711646" watchObservedRunningTime="2025-11-08 09:11:40.112231013 +0000 UTC m=+2.334615008"
	Nov 08 09:11:42 pause-322482 kubelet[1327]: I1108 09:11:42.192666    1327 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:11:42 pause-322482 kubelet[1327]: I1108 09:11:42.193264    1327 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.389865    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62h6n\" (UniqueName: \"kubernetes.io/projected/d014725c-c216-4b28-8694-a753f2d87b87-kube-api-access-62h6n\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.389928    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d014725c-c216-4b28-8694-a753f2d87b87-cni-cfg\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390027    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d014725c-c216-4b28-8694-a753f2d87b87-xtables-lock\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390097    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3e9e8a05-5439-48fb-9217-e23f242c9789-kube-proxy\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390124    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e9e8a05-5439-48fb-9217-e23f242c9789-lib-modules\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390144    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpvt8\" (UniqueName: \"kubernetes.io/projected/3e9e8a05-5439-48fb-9217-e23f242c9789-kube-api-access-fpvt8\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390172    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d014725c-c216-4b28-8694-a753f2d87b87-lib-modules\") pod \"kindnet-tst5j\" (UID: \"d014725c-c216-4b28-8694-a753f2d87b87\") " pod="kube-system/kindnet-tst5j"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.390201    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e9e8a05-5439-48fb-9217-e23f242c9789-xtables-lock\") pod \"kube-proxy-tbffl\" (UID: \"3e9e8a05-5439-48fb-9217-e23f242c9789\") " pod="kube-system/kube-proxy-tbffl"
	Nov 08 09:11:43 pause-322482 kubelet[1327]: I1108 09:11:43.932454    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-tst5j" podStartSLOduration=0.932409853 podStartE2EDuration="932.409853ms" podCreationTimestamp="2025-11-08 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:43.910047025 +0000 UTC m=+6.132431022" watchObservedRunningTime="2025-11-08 09:11:43.932409853 +0000 UTC m=+6.154793849"
	Nov 08 09:11:47 pause-322482 kubelet[1327]: I1108 09:11:47.488223    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tbffl" podStartSLOduration=4.488198252 podStartE2EDuration="4.488198252s" podCreationTimestamp="2025-11-08 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:43.935744098 +0000 UTC m=+6.158128095" watchObservedRunningTime="2025-11-08 09:11:47.488198252 +0000 UTC m=+9.710582251"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.255836    1327 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.376591    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e-config-volume\") pod \"coredns-66bc5c9577-8h2lz\" (UID: \"551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e\") " pod="kube-system/coredns-66bc5c9577-8h2lz"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.376649    1327 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwjkc\" (UniqueName: \"kubernetes.io/projected/551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e-kube-api-access-gwjkc\") pod \"coredns-66bc5c9577-8h2lz\" (UID: \"551b73d9-68f9-4ffc-bb3d-eb4650c8ce8e\") " pod="kube-system/coredns-66bc5c9577-8h2lz"
	Nov 08 09:11:54 pause-322482 kubelet[1327]: I1108 09:11:54.936104    1327 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8h2lz" podStartSLOduration=11.936081575 podStartE2EDuration="11.936081575s" podCreationTimestamp="2025-11-08 09:11:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:11:54.936083046 +0000 UTC m=+17.158467043" watchObservedRunningTime="2025-11-08 09:11:54.936081575 +0000 UTC m=+17.158465571"
	Nov 08 09:12:03 pause-322482 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:12:03 pause-322482 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:12:03 pause-322482 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:12:03 pause-322482 systemd[1]: kubelet.service: Consumed 1.138s CPU time.
	

-- /stdout --
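The component dump above has the shape of `out/minikube-linux-amd64 logs` output for the profile. As a pointer for local triage (the --file flag mirrors the advice minikube itself prints later in this report), the same dump can be regenerated against a live profile:

    # Regenerate the post-mortem component logs for the pause-322482 profile
    # and write them to a file suitable for attaching to an issue.
    out/minikube-linux-amd64 logs -p pause-322482 --file=logs.txt
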
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-322482 -n pause-322482
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-322482 -n pause-322482: exit status 2 (316.881589ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context pause-322482 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestPause/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/Pause (5.71s)
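Two hand checks line up with the evidence above: the kubelet journal ends with kubelet.service being stopped at 09:12:03, yet the status probe still reported the API server as Running. A minimal triage sketch, assuming the docker driver and this run's profile name; both commands are standard minikube CLI usage:

    # Confirm kubelet really was stopped, as the journal above shows.
    out/minikube-linux-amd64 ssh -p pause-322482 -- sudo systemctl is-active kubelet

    # Re-run the same status probe the post-mortem used; minikube encodes
    # component state into the exit code (see `minikube status --help`).
    out/minikube-linux-amd64 status --format='{{.APIServer}}' -p pause-322482; echo "exit=$?"
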

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (265.805887ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
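The MK_ADDON_ENABLE_PAUSED failure above is minikube's paused-state probe tripping, and the stderr quotes the exact command it ran (`sudo runc list -f json`). A minimal reproduction sketch inside the node, assuming standard `minikube ssh` command passthrough; the crictl cross-check is a suggestion for where to look next, not part of the test:

    # The probe that failed: runc finds no /run/runc state directory to read.
    out/minikube-linux-amd64 ssh -p old-k8s-version-339286 -- sudo runc list -f json

    # Containers are clearly running (see the docker inspect below), so ask
    # cri-o for its own view of them; one hypothesis is that the runtime's
    # container state simply does not live under runc's default /run/runc here.
    out/minikube-linux-amd64 ssh -p old-k8s-version-339286 -- sudo crictl ps
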
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-339286 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context old-k8s-version-339286 describe deploy/metrics-server -n kube-system: exit status 1 (59.938683ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-339286 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
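The assertion that fails above compares the deployment's container image against " fake.domain/registry.k8s.io/echoserver:1.4". The same field can be read by hand with plain kubectl; the jsonpath below is generic, and in this run it would reproduce the NotFound error, since the deployment was never created:

    # Read the image the metrics-server addon was configured with.
    kubectl --context old-k8s-version-339286 -n kube-system \
      get deploy metrics-server \
      -o jsonpath='{.spec.template.spec.containers[0].image}'
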
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-339286
helpers_test.go:243: (dbg) docker inspect old-k8s-version-339286:

-- stdout --
	[
	    {
	        "Id": "ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb",
	        "Created": "2025-11-08T09:15:31.664105217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 287845,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:15:31.706732575Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/hosts",
	        "LogPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb-json.log",
	        "Name": "/old-k8s-version-339286",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-339286:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-339286",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb",
	                "LowerDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-339286",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-339286/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-339286",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-339286",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-339286",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "dc1c3d21fdbc8658464837f841ebd0fea17ac3973e8f0fdd5c16f7ce3bd23a5a",
	            "SandboxKey": "/var/run/docker/netns/dc1c3d21fdbc",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33089"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33090"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33093"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33091"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33092"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-339286": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ea:41:97:dd:b4:3b",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "111659f5c16fa8de648fbd4b0737819906b512d8974c73538f9c6cac58753ac3",
	                    "EndpointID": "1f74a1a8a17f7c60d3a06b7303f9f384869f156862ceb95ea65cea6ec181746f",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-339286",
	                        "ce364047d86b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
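Note on the inspect output above: the test helpers resolve the host side of each published container port (e.g. "22/tcp" -> 127.0.0.1:33089) by running `docker container inspect` with a Go template over NetworkSettings.Ports, exactly as the cli_runner lines further down show. A minimal sketch of that lookup, assuming a hypothetical helper name, in Go:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // hostPortFor shells out to `docker container inspect` with the same Go
    // template seen in the cli_runner log lines below, indexing the first
    // host binding of the given container port.
    func hostPortFor(container, port string) (string, error) {
        format := fmt.Sprintf(`{{(index (index .NetworkSettings.Ports %q) 0).HostPort}}`, port)
        out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        p, err := hostPortFor("old-k8s-version-339286", "22/tcp")
        if err != nil {
            panic(err)
        }
        fmt.Println(p) // with the inspect output above: 33089
    }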
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-339286 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-339286 logs -n 25: (1.085889062s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-732849 sudo journalctl -xeu kubelet --all --full --no-pager                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/kubernetes/kubelet.conf                                                                                                                   │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo docker system info                                                                                                                                 │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cri-dockerd --version                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo containerd config dump                                                                                                                             │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo crio config                                                                                                                                        │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p bridge-732849                                                                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                          │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:16:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:16:14.619702  302884 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:14.620015  302884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:14.620022  302884 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:14.620029  302884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:14.620497  302884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:16:14.621237  302884 out.go:368] Setting JSON to false
	I1108 09:16:14.623593  302884 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3526,"bootTime":1762589849,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:16:14.624451  302884 start.go:143] virtualization: kvm guest
	I1108 09:16:14.626457  302884 out.go:179] * [default-k8s-diff-port-677902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:16:14.629520  302884 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:16:14.629524  302884 notify.go:221] Checking for updates...
	I1108 09:16:14.631258  302884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:16:14.632595  302884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:16:14.634002  302884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:16:14.635485  302884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:16:14.636691  302884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:16:14.638679  302884 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:14.638844  302884 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:14.638954  302884 config.go:182] Loaded profile config "old-k8s-version-339286": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:16:14.639063  302884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:16:14.691152  302884 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:16:14.691332  302884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:14.813570  302884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:16:14.796199727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:14.813892  302884 docker.go:319] overlay module found
	I1108 09:16:14.816970  302884 out.go:179] * Using the docker driver based on user configuration
	I1108 09:16:14.818292  302884 start.go:309] selected driver: docker
	I1108 09:16:14.818348  302884 start.go:930] validating driver "docker" against <nil>
	I1108 09:16:14.818374  302884 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:16:14.819199  302884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:14.933520  302884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:16:14.916255188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:14.933793  302884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:16:14.934044  302884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:14.938665  302884 out.go:179] * Using Docker driver with root privileges
	I1108 09:16:14.940021  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:14.940170  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:14.940249  302884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:16:14.940569  302884 start.go:353] cluster config:
	{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
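The cluster config dumped above is what profile.go later persists as JSON under .minikube/profiles/<name>/config.json. For orientation only, a trimmed, hypothetical Go subset of that structure (field names taken from the dump, everything else omitted):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // KubernetesConfig and ClusterConfig mirror a handful of fields from the
    // dump above; this is an illustrative subset, not minikube's real types.
    type KubernetesConfig struct {
        KubernetesVersion string
        ClusterName       string
        ContainerRuntime  string
        NetworkPlugin     string
        ServiceCIDR       string
    }

    type ClusterConfig struct {
        Name             string
        Driver           string
        Memory           int // MiB
        CPUs             int
        APIServerPort    int
        KubernetesConfig KubernetesConfig
    }

    func main() {
        cc := ClusterConfig{
            Name: "default-k8s-diff-port-677902", Driver: "docker",
            Memory: 3072, CPUs: 2, APIServerPort: 8444,
            KubernetesConfig: KubernetesConfig{
                KubernetesVersion: "v1.34.1",
                ClusterName:       "default-k8s-diff-port-677902",
                ContainerRuntime:  "crio",
                NetworkPlugin:     "cni",
                ServiceCIDR:       "10.96.0.0/12",
            },
        }
        b, _ := json.MarshalIndent(cc, "", "  ")
        fmt.Println(string(b)) // roughly what lands in config.json
    }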
	I1108 09:16:14.943927  302884 out.go:179] * Starting "default-k8s-diff-port-677902" primary control-plane node in "default-k8s-diff-port-677902" cluster
	I1108 09:16:14.945738  302884 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:16:14.946990  302884 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:16:14.067805  294020 cli_runner.go:164] Run: docker container inspect embed-certs-271910 --format={{.State.Status}}
	I1108 09:16:14.074060  294020 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.074085  294020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:16:14.074146  294020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:14.103402  294020 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.103432  294020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:16:14.103506  294020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:14.108099  294020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:16:14.132496  294020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:16:14.147070  294020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:16:14.201882  294020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:14.237829  294020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.253009  294020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.432416  294020 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
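Reconstructed from the sed expressions in the ssh_runner command above (this is an inference from that command, not captured output): the edit inserts a hosts stanza before the Corefile's forward directive, so the embed-certs cluster's CoreDNS config gains

    hosts {
       192.168.85.1 host.minikube.internal
       fallthrough
    }

and a `log` line is inserted before `errors`. That is what "host record injected into CoreDNS's ConfigMap" refers to.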
	I1108 09:16:14.437859  294020 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271910" to be "Ready" ...
	I1108 09:16:14.957896  294020 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-271910" context rescaled to 1 replicas
	I1108 09:16:14.969443  294020 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:16:14.948456  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:14.948520  302884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:16:14.948532  302884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:16:14.948688  302884 cache.go:59] Caching tarball of preloaded images
	I1108 09:16:14.949020  302884 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:16:14.949079  302884 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:16:14.949215  302884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:16:14.949344  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json: {Name:mk5bfc4db394c708a6042a234b18539bd8dad38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:14.984638  302884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:16:14.984672  302884 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:16:14.984705  302884 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:16:14.984748  302884 start.go:360] acquireMachinesLock for default-k8s-diff-port-677902: {Name:mk526669374d724485de61415f0aa79950bc7fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:14.984878  302884 start.go:364] duration metric: took 108.394µs to acquireMachinesLock for "default-k8s-diff-port-677902"
	I1108 09:16:14.984915  302884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:16:14.985006  302884 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:16:10.370669  285556 node_ready.go:57] node "old-k8s-version-339286" has "Ready":"False" status (will retry)
	W1108 09:16:12.868173  285556 node_ready.go:57] node "old-k8s-version-339286" has "Ready":"False" status (will retry)
	I1108 09:16:14.398457  285556 node_ready.go:49] node "old-k8s-version-339286" is "Ready"
	I1108 09:16:14.398745  285556 node_ready.go:38] duration metric: took 13.534293684s for node "old-k8s-version-339286" to be "Ready" ...
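The node_ready lines above are a retry loop: check the node's Ready condition, log "will retry" while it is False, stop once it flips (13.5s here, with a 6m budget). A minimal sketch of that shape, assuming nothing about minikube's internals beyond the poll-until-true pattern:

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls cond at the given interval until it returns true,
    // returns an error, or the timeout elapses.
    func waitFor(timeout, interval time.Duration, cond func() (bool, error)) error {
        deadline := time.Now().Add(timeout)
        for {
            ok, err := cond()
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            if time.Now().After(deadline) {
                return errors.New("timed out waiting for condition")
            }
            time.Sleep(interval)
        }
    }

    func main() {
        start := time.Now()
        err := waitFor(6*time.Minute, 2*time.Second, func() (bool, error) {
            // stand-in condition; in the logs this checks the node's
            // Ready status via the API server
            return time.Since(start) > 4*time.Second, nil
        })
        fmt.Println(err, "after", time.Since(start).Round(time.Second))
    }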
	I1108 09:16:14.398779  285556 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:14.398863  285556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:14.426992  285556 api_server.go:72] duration metric: took 14.046193072s to wait for apiserver process to appear ...
	I1108 09:16:14.427020  285556 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:14.427040  285556 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:16:14.457535  285556 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:16:14.460756  285556 api_server.go:141] control plane version: v1.28.0
	I1108 09:16:14.460783  285556 api_server.go:131] duration metric: took 33.754556ms to wait for apiserver health ...
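The healthz probe logged just above is a plain HTTPS GET against the apiserver that treats a 200 with body "ok" as healthy. A sketch under simplified assumptions (TLS verification skipped for brevity; the real client authenticates with the cluster's certificates):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // apiserverHealthy GETs /healthz and reports whether the apiserver
    // answered 200 "ok", as in the log lines above.
    func apiserverHealthy(url string) (bool, error) {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
            },
        }
        resp, err := client.Get(url)
        if err != nil {
            return false, err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        return resp.StatusCode == http.StatusOK && string(body) == "ok", nil
    }

    func main() {
        ok, err := apiserverHealthy("https://192.168.103.2:8443/healthz")
        fmt.Println(ok, err)
    }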
	I1108 09:16:14.460796  285556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:14.468460  285556 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:14.468503  285556 system_pods.go:61] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.468511  285556 system_pods.go:61] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.468519  285556 system_pods.go:61] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.468524  285556 system_pods.go:61] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.468530  285556 system_pods.go:61] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.468534  285556 system_pods.go:61] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.468539  285556 system_pods.go:61] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.468545  285556 system_pods.go:61] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:14.468553  285556 system_pods.go:74] duration metric: took 7.750133ms to wait for pod list to return data ...
	I1108 09:16:14.468563  285556 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:14.473761  285556 default_sa.go:45] found service account: "default"
	I1108 09:16:14.473786  285556 default_sa.go:55] duration metric: took 5.215828ms for default service account to be created ...
	I1108 09:16:14.473811  285556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:14.485871  285556 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:14.485923  285556 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.485932  285556 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.485941  285556 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.485953  285556 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.485970  285556 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.485975  285556 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.485991  285556 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.485998  285556 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:14.486054  285556 retry.go:31] will retry after 246.902773ms: missing components: kube-dns
	I1108 09:16:14.744570  285556 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:14.744609  285556 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.744618  285556 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.744627  285556 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.744637  285556 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.744643  285556 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.744648  285556 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.744653  285556 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.744658  285556 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Running
	I1108 09:16:14.744667  285556 system_pods.go:126] duration metric: took 270.849268ms to wait for k8s-apps to be running ...
	I1108 09:16:14.744677  285556 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:14.744731  285556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:14.769258  285556 system_svc.go:56] duration metric: took 24.56978ms WaitForService to wait for kubelet
	I1108 09:16:14.769309  285556 kubeadm.go:587] duration metric: took 14.388514306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:14.769556  285556 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:14.774712  285556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:14.774739  285556 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:14.774812  285556 node_conditions.go:105] duration metric: took 5.192043ms to run NodePressure ...
	I1108 09:16:14.774830  285556 start.go:242] waiting for startup goroutines ...
	I1108 09:16:14.774881  285556 start.go:247] waiting for cluster config update ...
	I1108 09:16:14.774895  285556 start.go:256] writing updated cluster config ...
	I1108 09:16:14.775329  285556 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:14.780932  285556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:14.790003  285556 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:14.428477  288696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.428494  288696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:16:14.428555  288696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:14.459240  288696 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.459267  288696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:16:14.459355  288696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:14.477655  288696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:16:14.497326  288696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:16:14.636260  288696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:16:14.677268  288696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:14.695739  288696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.805038  288696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:15.046647  288696 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1108 09:16:15.048786  288696 node_ready.go:35] waiting up to 6m0s for node "no-preload-220714" to be "Ready" ...
	I1108 09:16:15.350945  288696 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:16:15.801076  285556 pod_ready.go:94] pod "coredns-5dd5756b68-88pvx" is "Ready"
	I1108 09:16:15.801161  285556 pod_ready.go:86] duration metric: took 1.011063973s for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.805636  285556 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.811600  285556 pod_ready.go:94] pod "etcd-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.811650  285556 pod_ready.go:86] duration metric: took 5.984998ms for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.816583  285556 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.823575  285556 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.823606  285556 pod_ready.go:86] duration metric: took 6.946404ms for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.827507  285556 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.995157  285556 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.995188  285556 pod_ready.go:86] duration metric: took 167.654484ms for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.194993  285556 pod_ready.go:83] waiting for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.594916  285556 pod_ready.go:94] pod "kube-proxy-v4l6x" is "Ready"
	I1108 09:16:16.594953  285556 pod_ready.go:86] duration metric: took 399.929202ms for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.795274  285556 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:17.194081  285556 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-339286" is "Ready"
	I1108 09:16:17.194107  285556 pod_ready.go:86] duration metric: took 398.769764ms for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:17.194123  285556 pod_ready.go:40] duration metric: took 2.41311476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:17.240446  285556 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:16:17.242415  285556 out.go:203] 
	W1108 09:16:17.243926  285556 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:16:17.248943  285556 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:16:17.250772  285556 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-339286" cluster and "default" namespace by default
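The "minor skew: 6" warning above comes from comparing kubectl's minor version (34) with the cluster's (28); kubectl is only supported within one minor version of the control plane. A sketch of that comparison (the helper name is hypothetical):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew parses "major.minor.patch" strings and returns the absolute
    // difference of the minor components.
    func minorSkew(client, cluster string) (int, error) {
        minor := func(v string) (int, error) {
            parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
            if len(parts) < 2 {
                return 0, fmt.Errorf("bad version %q", v)
            }
            return strconv.Atoi(parts[1])
        }
        cm, err := minor(client)
        if err != nil {
            return 0, err
        }
        sm, err := minor(cluster)
        if err != nil {
            return 0, err
        }
        if cm > sm {
            return cm - sm, nil
        }
        return sm - cm, nil
    }

    func main() {
        skew, _ := minorSkew("1.34.1", "1.28.0")
        fmt.Println(skew) // 6, matching the start.go:628 line above
    }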
	I1108 09:16:15.355429  288696 addons.go:515] duration metric: took 994.950876ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:16:15.554093  288696 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-220714" context rescaled to 1 replicas
	W1108 09:16:17.051497  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:14.970722  294020 addons.go:515] duration metric: took 934.784036ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:16:16.442258  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:14.988644  302884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:16:14.988941  302884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:16:14.988979  302884 client.go:173] LocalClient.Create starting
	I1108 09:16:14.989121  302884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 09:16:14.989164  302884 main.go:143] libmachine: Decoding PEM data...
	I1108 09:16:14.989194  302884 main.go:143] libmachine: Parsing certificate...
	I1108 09:16:14.989303  302884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 09:16:14.989337  302884 main.go:143] libmachine: Decoding PEM data...
	I1108 09:16:14.989349  302884 main.go:143] libmachine: Parsing certificate...
	I1108 09:16:14.989787  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:16:15.020585  302884 cli_runner.go:211] docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:16:15.020664  302884 network_create.go:284] running [docker network inspect default-k8s-diff-port-677902] to gather additional debugging logs...
	I1108 09:16:15.020681  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902
	W1108 09:16:15.047609  302884 cli_runner.go:211] docker network inspect default-k8s-diff-port-677902 returned with exit code 1
	I1108 09:16:15.047686  302884 network_create.go:287] error running [docker network inspect default-k8s-diff-port-677902]: docker network inspect default-k8s-diff-port-677902: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-677902 not found
	I1108 09:16:15.047745  302884 network_create.go:289] output of [docker network inspect default-k8s-diff-port-677902]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-677902 not found
	
	** /stderr **
	I1108 09:16:15.048043  302884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:16:15.076013  302884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
	I1108 09:16:15.076913  302884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-69402498439f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:64:3c:58:48:b9} reservation:<nil>}
	I1108 09:16:15.077960  302884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11dfd15cc420 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:1d:c0:7a:ca:31} reservation:<nil>}
	I1108 09:16:15.079133  302884 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec8b10}
	I1108 09:16:15.079166  302884 network_create.go:124] attempt to create docker network default-k8s-diff-port-677902 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:16:15.079219  302884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 default-k8s-diff-port-677902
	I1108 09:16:15.171652  302884 network_create.go:108] docker network default-k8s-diff-port-677902 192.168.76.0/24 created
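In the network.go lines above, minikube scans private /24 candidates, skips the ones already claimed by existing bridges (192.168.49.0, .58.0, .67.0), and takes the first free one (.76.0). The logged candidates sit 9 apart in the third octet, so a sketch of the scan under that assumption (takenSubnets standing in for what `docker network inspect` reports):

    package main

    import "fmt"

    // freeSubnet walks 192.168.x.0/24 candidates and returns the first one
    // not present in the taken set.
    func freeSubnet(taken map[string]bool) string {
        for octet := 49; octet < 256; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, // br-b3f2c64ee2dd in the log
            "192.168.58.0/24": true, // br-69402498439f
            "192.168.67.0/24": true, // br-11dfd15cc420
        }
        fmt.Println(freeSubnet(taken)) // 192.168.76.0/24, as chosen above
    }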
	I1108 09:16:15.171687  302884 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-677902" container
	I1108 09:16:15.171753  302884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:16:15.199943  302884 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-677902 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:16:15.225618  302884 oci.go:103] Successfully created a docker volume default-k8s-diff-port-677902
	I1108 09:16:15.225772  302884 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-677902-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --entrypoint /usr/bin/test -v default-k8s-diff-port-677902:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:16:15.866328  302884 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-677902
	I1108 09:16:15.866376  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:15.866401  302884 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:16:15.866471  302884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-677902:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 09:16:19.052301  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	W1108 09:16:21.552514  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:20.584332  302884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-677902:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.717760526s)
	I1108 09:16:20.584367  302884 kic.go:203] duration metric: took 4.717962939s to extract preloaded images to volume ...
	W1108 09:16:20.584469  302884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:16:20.584509  302884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:16:20.584562  302884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:16:20.649658  302884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-677902 --name default-k8s-diff-port-677902 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --network default-k8s-diff-port-677902 --ip 192.168.76.2 --volume default-k8s-diff-port-677902:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:16:20.985463  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Running}}
	I1108 09:16:21.005078  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.023858  302884 cli_runner.go:164] Run: docker exec default-k8s-diff-port-677902 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:16:21.072397  302884 oci.go:144] the created container "default-k8s-diff-port-677902" has a running status.
	I1108 09:16:21.072432  302884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa...
	I1108 09:16:21.328004  302884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:16:21.358901  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.381864  302884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:16:21.381926  302884 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-677902 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:16:21.429674  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.450173  302884 machine.go:94] provisionDockerMachine start ...
	I1108 09:16:21.450256  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.471253  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.471544  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.471559  302884 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:16:21.604466  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:16:21.604500  302884 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-677902"
	I1108 09:16:21.604558  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.625801  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.626035  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.626052  302884 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-677902 && echo "default-k8s-diff-port-677902" | sudo tee /etc/hostname
	I1108 09:16:21.767180  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:16:21.767256  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.786052  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.786341  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.786363  302884 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-677902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-677902/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-677902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:16:21.917181  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:16:21.917219  302884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:16:21.917239  302884 ubuntu.go:190] setting up certificates
	I1108 09:16:21.917247  302884 provision.go:84] configureAuth start
	I1108 09:16:21.917317  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:21.935307  302884 provision.go:143] copyHostCerts
	I1108 09:16:21.935370  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:16:21.935382  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:16:21.935449  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:16:21.935553  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:16:21.935562  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:16:21.935591  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:16:21.935701  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:16:21.935713  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:16:21.935739  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:16:21.935803  302884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-677902 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-677902 localhost minikube]
	I1108 09:16:22.042345  302884 provision.go:177] copyRemoteCerts
	I1108 09:16:22.042398  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:16:22.042450  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.062501  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.156803  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:16:22.176432  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:16:22.194210  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:16:22.212199  302884 provision.go:87] duration metric: took 294.93803ms to configureAuth
	I1108 09:16:22.212230  302884 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:16:22.212437  302884 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:22.212551  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.231181  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:22.231443  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:22.231463  302884 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:16:22.470271  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:16:22.470308  302884 machine.go:97] duration metric: took 1.020112912s to provisionDockerMachine
	I1108 09:16:22.470320  302884 client.go:176] duration metric: took 7.481335007s to LocalClient.Create
	I1108 09:16:22.470341  302884 start.go:167] duration metric: took 7.481404005s to libmachine.API.Create "default-k8s-diff-port-677902"
	I1108 09:16:22.470350  302884 start.go:293] postStartSetup for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:16:22.470362  302884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:16:22.470433  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:16:22.470471  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.490818  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.586821  302884 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:16:22.590810  302884 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:16:22.590839  302884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:16:22.590852  302884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:16:22.591149  302884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:16:22.591343  302884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:16:22.591507  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:16:22.600330  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:16:22.620675  302884 start.go:296] duration metric: took 150.312864ms for postStartSetup
	I1108 09:16:22.621005  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:22.638917  302884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:16:22.639195  302884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:16:22.639233  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.658713  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.750655  302884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:16:22.755273  302884 start.go:128] duration metric: took 7.770253809s to createHost
	I1108 09:16:22.755312  302884 start.go:83] releasing machines lock for "default-k8s-diff-port-677902", held for 7.770414218s
	I1108 09:16:22.755394  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:22.773899  302884 ssh_runner.go:195] Run: cat /version.json
	I1108 09:16:22.773917  302884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:16:22.773948  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.773974  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.794752  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.795127  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.889663  302884 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:22.942216  302884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:16:22.977581  302884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:16:22.982348  302884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:16:22.982411  302884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:16:23.008837  302884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:16:23.008860  302884 start.go:496] detecting cgroup driver to use...
	I1108 09:16:23.008896  302884 detect.go:190] detected "systemd" cgroup driver on host os
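
minikube selects the "systemd" cgroup driver when the host itself is managed by systemd. A rough hand equivalent of this detection step (a sketch, not minikube's actual code path):

	# "systemd" as PID 1 means the systemd cgroup driver is the right choice
	ps -p 1 -o comm=
	# "cgroup2fs" confirms the unified v2 hierarchy that systemd manages
	stat -fc %T /sys/fs/cgroup
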
	I1108 09:16:23.008949  302884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:16:23.025177  302884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:16:23.037624  302884 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:16:23.037681  302884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:16:23.054660  302884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:16:23.073210  302884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:16:23.155568  302884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:16:23.244179  302884 docker.go:234] disabling docker service ...
	I1108 09:16:23.244249  302884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:16:23.263226  302884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:16:23.276679  302884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:16:23.369719  302884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:16:23.452958  302884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:16:23.465534  302884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:16:23.480351  302884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:16:23.480429  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.490576  302884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:16:23.490636  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.499772  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.508365  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.517456  302884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:16:23.525954  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.535277  302884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.549170  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.558258  302884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:16:23.565676  302884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
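
Taken together, the sed edits above set the pause image, switch CRI-O to the systemd cgroup manager, pin conmon to the pod cgroup, and open unprivileged low ports. The drop-in they leave behind looks roughly like the following; this is a reconstruction from the commands, not a capture of the actual file, and the section placement follows stock CRI-O defaults:

	# /etc/crio/crio.conf.d/02-crio.conf (reconstructed, illustrative)
	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
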
	I1108 09:16:23.573369  302884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:16:23.653541  302884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:16:23.767673  302884 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:16:23.767729  302884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:16:23.771780  302884 start.go:564] Will wait 60s for crictl version
	I1108 09:16:23.771829  302884 ssh_runner.go:195] Run: which crictl
	I1108 09:16:23.775330  302884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:16:23.799928  302884 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:16:23.800010  302884 ssh_runner.go:195] Run: crio --version
	I1108 09:16:23.827743  302884 ssh_runner.go:195] Run: crio --version
	I1108 09:16:23.857164  302884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:16:18.941803  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	W1108 09:16:20.942622  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	W1108 09:16:23.441685  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:23.858390  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:16:23.875734  302884 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:16:23.879850  302884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:16:23.890489  302884 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:16:23.890611  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:23.890671  302884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:16:23.922889  302884 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:16:23.922910  302884 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:16:23.922950  302884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:16:23.948186  302884 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:16:23.948207  302884 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:16:23.948214  302884 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1108 09:16:23.948333  302884 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-677902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:16:23.948416  302884 ssh_runner.go:195] Run: crio config
	I1108 09:16:23.994577  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:23.994603  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:23.994707  302884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:16:23.994758  302884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-677902 NodeName:default-k8s-diff-port-677902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:16:23.994909  302884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-677902"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
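
The generated config above is written out as /var/tmp/minikube/kubeadm.yaml.new (see the scp below) and later handed to kubeadm. A hedged way to sanity-check such a file by hand, assuming it has been moved into place as kubeadm.yaml the way minikube does:

	# render manifests and validate the config without touching the node state
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
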
	
	I1108 09:16:23.994977  302884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:16:24.003550  302884 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:16:24.003613  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:16:24.011668  302884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:16:24.025570  302884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:16:24.040656  302884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:16:24.053685  302884 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:16:24.057813  302884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:16:24.068090  302884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:16:24.153388  302884 ssh_runner.go:195] Run: sudo systemctl start kubelet
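
Note that the empty `ExecStart=` line in the drop-in above is the standard systemd idiom for clearing the packaged command before overriding it. A sketch of hand checks after the daemon-reload and start:

	# show the kubelet unit with the 10-kubeadm.conf drop-in merged in
	systemctl cat kubelet
	# "active" once the service is up
	systemctl is-active kubelet
	# recent kubelet logs if it is not
	journalctl -u kubelet --no-pager -n 20
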
	I1108 09:16:24.180756  302884 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902 for IP: 192.168.76.2
	I1108 09:16:24.180778  302884 certs.go:195] generating shared ca certs ...
	I1108 09:16:24.180792  302884 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.180962  302884 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:16:24.181003  302884 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:16:24.181013  302884 certs.go:257] generating profile certs ...
	I1108 09:16:24.181084  302884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key
	I1108 09:16:24.181110  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt with IP's: []
	I1108 09:16:24.249417  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt ...
	I1108 09:16:24.249443  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt: {Name:mkb0424a7b2244acd4c9b08e8fd3832ca89c8cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.249643  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key ...
	I1108 09:16:24.249660  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key: {Name:mk98228a5537d26558a0a8aa80142320b934942d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.249773  302884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273
	I1108 09:16:24.249793  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:16:24.369815  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 ...
	I1108 09:16:24.369843  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273: {Name:mkfff96a8818db7317888f2704b4dce1877844fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.370020  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273 ...
	I1108 09:16:24.370036  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273: {Name:mkd7e2641bb265c1b14bb815272c25677391281b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.370138  302884 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt
	I1108 09:16:24.370218  302884 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key
	I1108 09:16:24.370275  302884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key
	I1108 09:16:24.370302  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt with IP's: []
	I1108 09:16:24.474350  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt ...
	I1108 09:16:24.474381  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt: {Name:mk129990eb5be69a3128d0b5b94ee200eae7c775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.474565  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key ...
	I1108 09:16:24.474588  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key: {Name:mk588b95436fa4f4c5adaa76c8236e776fdef198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.474803  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:16:24.474841  302884 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:16:24.474852  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:16:24.474873  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:16:24.474894  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:16:24.474915  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:16:24.474951  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:16:24.475489  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:16:24.494518  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:16:24.512401  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:16:24.530678  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:16:24.548124  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:16:24.566472  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:16:24.584982  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:16:24.603982  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
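
With the profile certs copied into /var/lib/minikube/certs, the SANs requested at generation time (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2, plus the hostnames) can be confirmed with openssl. A minimal check run inside the node, assuming openssl is present in the node image:

	# print the apiserver cert and grep out its Subject Alternative Names
	minikube -p default-k8s-diff-port-677902 ssh -- \
	  "sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'"
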
	W1108 09:16:24.051828  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	W1108 09:16:26.552224  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:27.551990  288696 node_ready.go:49] node "no-preload-220714" is "Ready"
	I1108 09:16:27.552021  288696 node_ready.go:38] duration metric: took 12.503203095s for node "no-preload-220714" to be "Ready" ...
	I1108 09:16:27.552043  288696 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:27.552094  288696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:27.567072  288696 api_server.go:72] duration metric: took 13.20624104s to wait for apiserver process to appear ...
	I1108 09:16:27.567097  288696 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:27.567115  288696 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1108 09:16:27.571234  288696 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1108 09:16:27.572225  288696 api_server.go:141] control plane version: v1.34.1
	I1108 09:16:27.572252  288696 api_server.go:131] duration metric: took 5.147393ms to wait for apiserver health ...
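
The healthz poll above is a plain HTTPS GET against the apiserver. The same probe by hand, assuming the node IP is reachable from where curl runs, as it is for minikube's own check (`-k` skips TLS verification; the second form verifies against the cluster CA instead):

	curl -sk https://192.168.94.2:8443/healthz                                     # prints "ok" on HTTP 200
	curl -s --cacert /var/lib/minikube/certs/ca.crt https://192.168.94.2:8443/healthz  # from inside the node
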
	I1108 09:16:27.572262  288696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:27.575571  288696 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:27.575606  288696 system_pods.go:61] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.575613  288696 system_pods.go:61] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.575621  288696 system_pods.go:61] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.575627  288696 system_pods.go:61] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.575636  288696 system_pods.go:61] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.575642  288696 system_pods.go:61] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.575649  288696 system_pods.go:61] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.575656  288696 system_pods.go:61] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.575667  288696 system_pods.go:74] duration metric: took 3.395544ms to wait for pod list to return data ...
	I1108 09:16:27.575676  288696 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:27.578421  288696 default_sa.go:45] found service account: "default"
	I1108 09:16:27.578442  288696 default_sa.go:55] duration metric: took 2.756827ms for default service account to be created ...
	I1108 09:16:27.578453  288696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:27.581851  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:27.581882  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.581890  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.581898  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.581904  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.581909  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.581914  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.581918  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.581925  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.582377  288696 retry.go:31] will retry after 309.619866ms: missing components: kube-dns
	I1108 09:16:27.897123  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:27.897166  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.897176  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.897183  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.897189  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.897196  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.897201  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.897206  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.897213  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.897230  288696 retry.go:31] will retry after 292.226039ms: missing components: kube-dns
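
The retry loop here is waiting for coredns (the "kube-dns" component) to leave Pending. The equivalent manual check, assuming kubectl is pointed at this profile via minikube's wrapper:

	# coredns pods carry the k8s-app=kube-dns label
	minikube -p no-preload-220714 kubectl -- -n kube-system get pods -l k8s-app=kube-dns
	# and why a specific pod is not ready yet
	minikube -p no-preload-220714 kubectl -- -n kube-system describe pod coredns-66bc5c9577-zdb97
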
	
	
	==> CRI-O <==
	Nov 08 09:16:14 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:14.401309749Z" level=info msg="Starting container: a4bc1e665af61a33b86285a0c13a2e5cb6260bb49123fe7189ca97ccb4569329" id=8851a95a-0061-4e6a-93df-112b100b83e3 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:14 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:14.406879487Z" level=info msg="Started container" PID=2081 containerID=a4bc1e665af61a33b86285a0c13a2e5cb6260bb49123fe7189ca97ccb4569329 description=kube-system/coredns-5dd5756b68-88pvx/coredns id=8851a95a-0061-4e6a-93df-112b100b83e3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=75c14cc7d643d527bbc26bb34c21629b6149ab53410860e24738bde9d1582d9c
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.717779186Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fe8b5163-b37f-4730-ae7f-c7cf402ceb79 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.717892642Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.723711452Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d6e5770865fcb6b68628d20dc793016645779416b4ef6ec91c09401cd5aa30bd UID:8691aea8-c976-4b06-9771-235555a5cebc NetNS:/var/run/netns/8bd91677-f977-4601-99b2-74dd0786129b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012b178}] Aliases:map[]}"
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.723748362Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.733565719Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:d6e5770865fcb6b68628d20dc793016645779416b4ef6ec91c09401cd5aa30bd UID:8691aea8-c976-4b06-9771-235555a5cebc NetNS:/var/run/netns/8bd91677-f977-4601-99b2-74dd0786129b Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00012b178}] Aliases:map[]}"
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.733691899Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.734443232Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.735257467Z" level=info msg="Ran pod sandbox d6e5770865fcb6b68628d20dc793016645779416b4ef6ec91c09401cd5aa30bd with infra container: default/busybox/POD" id=fe8b5163-b37f-4730-ae7f-c7cf402ceb79 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.736468775Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4a0ff975-dd1a-4fdd-b242-ed72811763e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.736603481Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4a0ff975-dd1a-4fdd-b242-ed72811763e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.736652327Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4a0ff975-dd1a-4fdd-b242-ed72811763e0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.737130629Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=66cc6812-1a09-436b-9665-e0a329cf4da3 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:17 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:17.739993826Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.574209809Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=66cc6812-1a09-436b-9665-e0a329cf4da3 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.575058247Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=08633399-73af-4e41-9fc0-e82d7c10c6cb name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.577443106Z" level=info msg="Creating container: default/busybox/busybox" id=b51da822-e2fd-4ec5-acd6-3cf8f4cf9478 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.57758747Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.58336887Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.583822735Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.620990514Z" level=info msg="Created container a2d3d8f72d34d79dd26bdf599b6cccaea7a2d097ce2a1272f5e688c59a44d217: default/busybox/busybox" id=b51da822-e2fd-4ec5-acd6-3cf8f4cf9478 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.621641076Z" level=info msg="Starting container: a2d3d8f72d34d79dd26bdf599b6cccaea7a2d097ce2a1272f5e688c59a44d217" id=ca5aec98-d41a-4177-a894-f76770ea7b9f name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:20 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:20.623783479Z" level=info msg="Started container" PID=2159 containerID=a2d3d8f72d34d79dd26bdf599b6cccaea7a2d097ce2a1272f5e688c59a44d217 description=default/busybox/busybox id=ca5aec98-d41a-4177-a894-f76770ea7b9f name=/runtime.v1.RuntimeService/StartContainer sandboxID=d6e5770865fcb6b68628d20dc793016645779416b4ef6ec91c09401cd5aa30bd
	Nov 08 09:16:27 old-k8s-version-339286 crio[774]: time="2025-11-08T09:16:27.496716067Z" level=error msg="Unhandled Error: unable to upgrade websocket connection: websocket server finished before becoming ready (logger=\"UnhandledError\")"
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                              NAMESPACE
	a2d3d8f72d34d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   8 seconds ago       Running             busybox                   0                   d6e5770865fcb       busybox                                          default
	a4bc1e665af61       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 seconds ago      Running             coredns                   0                   75c14cc7d643d       coredns-5dd5756b68-88pvx                         kube-system
	32f2f9934d82c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      14 seconds ago      Running             storage-provisioner       0                   00aa1195bf2b0       storage-provisioner                              kube-system
	2803aa3e5f0a2       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    25 seconds ago      Running             kindnet-cni               0                   8992ba657c729       kindnet-6d922                                    kube-system
	3a3171f9219b2       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                      27 seconds ago      Running             kube-proxy                0                   dd748563e9fe6       kube-proxy-v4l6x                                 kube-system
	f641d2a7bdd79       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      45 seconds ago      Running             etcd                      0                   b6392e04f04bd       etcd-old-k8s-version-339286                      kube-system
	9726a3a91a2d1       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                      45 seconds ago      Running             kube-scheduler            0                   a0a146ad2dabe       kube-scheduler-old-k8s-version-339286            kube-system
	8d8bf6138b283       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                      45 seconds ago      Running             kube-controller-manager   0                   a20f850d012c9       kube-controller-manager-old-k8s-version-339286   kube-system
	54a0f50560680       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                      45 seconds ago      Running             kube-apiserver            0                   3f38004d041db       kube-apiserver-old-k8s-version-339286            kube-system
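
This table is the node's CRI view of its containers, roughly what `crictl ps -a` prints on the node (hedged: minikube's log collector may adjust the columns). To reproduce it, or pull logs for one of the containers listed, truncated IDs are accepted:

	minikube -p old-k8s-version-339286 ssh -- sudo crictl ps -a
	# e.g. logs of the busybox container from the first row
	minikube -p old-k8s-version-339286 ssh -- sudo crictl logs a2d3d8f72d34d
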
	
	
	==> coredns [a4bc1e665af61a33b86285a0c13a2e5cb6260bb49123fe7189ca97ccb4569329] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:39750 - 6455 "HINFO IN 4962772302030731348.739962771808409858. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.021862046s
	
	
	==> describe nodes <==
	Name:               old-k8s-version-339286
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-339286
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=old-k8s-version-339286
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_15_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:15:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-339286
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:16:19 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:16:20 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:16:20 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:16:20 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:16:20 +0000   Sat, 08 Nov 2025 09:16:13 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-339286
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                67b4f6ec-c7a7-47b7-a68b-0baf0383287f
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  kube-system                 coredns-5dd5756b68-88pvx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     27s
	  kube-system                 etcd-old-k8s-version-339286                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         40s
	  kube-system                 kindnet-6d922                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      28s
	  kube-system                 kube-apiserver-old-k8s-version-339286             250m (3%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-controller-manager-old-k8s-version-339286    200m (2%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 kube-proxy-v4l6x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-scheduler-old-k8s-version-339286             100m (1%)     0 (0%)      0 (0%)           0 (0%)         40s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27s                kube-proxy       
	  Normal  Starting                 46s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  46s (x8 over 46s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x8 over 46s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x8 over 46s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  Starting                 40s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  40s                kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    40s                kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     40s                kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           28s                node-controller  Node old-k8s-version-339286 event: Registered Node old-k8s-version-339286 in Controller
	  Normal  NodeReady                15s                kubelet          Node old-k8s-version-339286 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [f641d2a7bdd79e84f1a204a45dcb50913fd482162e257a681f2f3295f6888b0b] <==
	{"level":"warn","ts":"2025-11-08T09:15:48.993066Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"214.820026ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/old-k8s-version-339286\" ","response":"range_response_count:1 size:4972"}
	{"level":"info","ts":"2025-11-08T09:15:48.993098Z","caller":"traceutil/trace.go:171","msg":"trace[765872542] range","detail":"{range_begin:/registry/minions/old-k8s-version-339286; range_end:; response_count:1; response_revision:272; }","duration":"214.88942ms","start":"2025-11-08T09:15:48.778199Z","end":"2025-11-08T09:15:48.993089Z","steps":["trace[765872542] 'agreement among raft nodes before linearized reading'  (duration: 214.750685ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:15:49.237621Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.863728ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789876010641005 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/old-k8s-version-339286\" mod_revision:250 > success:<request_put:<key:\"/registry/minions/old-k8s-version-339286\" value_size:4891 >> failure:<request_range:<key:\"/registry/minions/old-k8s-version-339286\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:15:49.237819Z","caller":"traceutil/trace.go:171","msg":"trace[89819466] transaction","detail":"{read_only:false; response_revision:275; number_of_response:1; }","duration":"217.866562ms","start":"2025-11-08T09:15:49.01993Z","end":"2025-11-08T09:15:49.237797Z","steps":["trace[89819466] 'process raft request'  (duration: 217.782694ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:15:49.23787Z","caller":"traceutil/trace.go:171","msg":"trace[435551578] transaction","detail":"{read_only:false; response_revision:274; number_of_response:1; }","duration":"221.324474ms","start":"2025-11-08T09:15:49.016513Z","end":"2025-11-08T09:15:49.237837Z","steps":["trace[435551578] 'process raft request'  (duration: 109.259604ms)","trace[435551578] 'compare'  (duration: 110.776996ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:15:49.573453Z","caller":"traceutil/trace.go:171","msg":"trace[1749720589] linearizableReadLoop","detail":"{readStateIndex:286; appliedIndex:285; }","duration":"109.898503ms","start":"2025-11-08T09:15:49.463534Z","end":"2025-11-08T09:15:49.573433Z","steps":["trace[1749720589] 'read index received'  (duration: 70.194911ms)","trace[1749720589] 'applied index is now lower than readState.Index'  (duration: 39.702481ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:15:49.573488Z","caller":"traceutil/trace.go:171","msg":"trace[2000133340] transaction","detail":"{read_only:false; response_revision:279; number_of_response:1; }","duration":"124.525872ms","start":"2025-11-08T09:15:49.448932Z","end":"2025-11-08T09:15:49.573458Z","steps":["trace[2000133340] 'process raft request'  (duration: 84.787632ms)","trace[2000133340] 'compare'  (duration: 39.600046ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:15:49.573649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"110.120538ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-old-k8s-version-339286\" ","response":"range_response_count:1 size:2966"}
	{"level":"info","ts":"2025-11-08T09:15:49.573701Z","caller":"traceutil/trace.go:171","msg":"trace[500377432] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-old-k8s-version-339286; range_end:; response_count:1; response_revision:279; }","duration":"110.191299ms","start":"2025-11-08T09:15:49.463498Z","end":"2025-11-08T09:15:49.573689Z","steps":["trace[500377432] 'agreement among raft nodes before linearized reading'  (duration: 110.010161ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:15:49.575645Z","caller":"traceutil/trace.go:171","msg":"trace[880779389] transaction","detail":"{read_only:false; number_of_response:0; response_revision:279; }","duration":"110.902486ms","start":"2025-11-08T09:15:49.46473Z","end":"2025-11-08T09:15:49.575632Z","steps":["trace[880779389] 'process raft request'  (duration: 110.767585ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:15:49.575672Z","caller":"traceutil/trace.go:171","msg":"trace[313763787] transaction","detail":"{read_only:false; number_of_response:0; response_revision:279; }","duration":"110.944399ms","start":"2025-11-08T09:15:49.464714Z","end":"2025-11-08T09:15:49.575659Z","steps":["trace[313763787] 'process raft request'  (duration: 110.840275ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:15:49.575646Z","caller":"traceutil/trace.go:171","msg":"trace[959899497] transaction","detail":"{read_only:false; number_of_response:0; response_revision:279; }","duration":"110.813021ms","start":"2025-11-08T09:15:49.464823Z","end":"2025-11-08T09:15:49.575636Z","steps":["trace[959899497] 'process raft request'  (duration: 110.770351ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:15:49.78423Z","caller":"traceutil/trace.go:171","msg":"trace[1789365734] transaction","detail":"{read_only:false; response_revision:284; number_of_response:1; }","duration":"170.772838ms","start":"2025-11-08T09:15:49.613438Z","end":"2025-11-08T09:15:49.784211Z","steps":["trace[1789365734] 'process raft request'  (duration: 129.344139ms)","trace[1789365734] 'compare'  (duration: 41.316026ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:15:49.948598Z","caller":"traceutil/trace.go:171","msg":"trace[31749522] transaction","detail":"{read_only:false; response_revision:285; number_of_response:1; }","duration":"155.236443ms","start":"2025-11-08T09:15:49.793339Z","end":"2025-11-08T09:15:49.948575Z","steps":["trace[31749522] 'process raft request'  (duration: 130.498287ms)","trace[31749522] 'compare'  (duration: 24.62246ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:15:50.872407Z","caller":"traceutil/trace.go:171","msg":"trace[747610101] transaction","detail":"{read_only:false; response_revision:291; number_of_response:1; }","duration":"252.054126ms","start":"2025-11-08T09:15:50.62033Z","end":"2025-11-08T09:15:50.872384Z","steps":["trace[747610101] 'process raft request'  (duration: 186.011127ms)","trace[747610101] 'compare'  (duration: 65.929681ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:15:51.139006Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.525864ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789876010641040 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/kindnet\" mod_revision:0 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/kindnet\" value_size:452 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:15:51.139102Z","caller":"traceutil/trace.go:171","msg":"trace[1149576860] transaction","detail":"{read_only:false; response_revision:292; number_of_response:1; }","duration":"258.116952ms","start":"2025-11-08T09:15:50.880968Z","end":"2025-11-08T09:15:51.139085Z","steps":["trace[1149576860] 'process raft request'  (duration: 129.447303ms)","trace[1149576860] 'compare'  (duration: 128.399138ms)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:15:55.712093Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"127.492388ms","expected-duration":"100ms","prefix":"","request":"header:<ID:13873789876010641065 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-4aco2ydb4r5harxuuaby4xazd4\" mod_revision:20 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-4aco2ydb4r5harxuuaby4xazd4\" value_size:614 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-4aco2ydb4r5harxuuaby4xazd4\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-11-08T09:15:55.712184Z","caller":"traceutil/trace.go:171","msg":"trace[71339986] transaction","detail":"{read_only:false; response_revision:300; number_of_response:1; }","duration":"212.716559ms","start":"2025-11-08T09:15:55.499451Z","end":"2025-11-08T09:15:55.712167Z","steps":["trace[71339986] 'process raft request'  (duration: 85.06384ms)","trace[71339986] 'compare'  (duration: 127.363284ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:16:01.287642Z","caller":"traceutil/trace.go:171","msg":"trace[172734538] transaction","detail":"{read_only:false; number_of_response:1; response_revision:388; }","duration":"153.356588ms","start":"2025-11-08T09:16:01.134263Z","end":"2025-11-08T09:16:01.28762Z","steps":["trace[172734538] 'process raft request'  (duration: 61.954498ms)","trace[172734538] 'compare'  (duration: 91.113971ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:16:01.287758Z","caller":"traceutil/trace.go:171","msg":"trace[1651454545] transaction","detail":"{read_only:false; response_revision:389; number_of_response:1; }","duration":"150.461405ms","start":"2025-11-08T09:16:01.137276Z","end":"2025-11-08T09:16:01.287738Z","steps":["trace[1651454545] 'process raft request'  (duration: 150.255387ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:16:03.638969Z","caller":"traceutil/trace.go:171","msg":"trace[1897971839] transaction","detail":"{read_only:false; response_revision:411; number_of_response:1; }","duration":"129.930027ms","start":"2025-11-08T09:16:03.509023Z","end":"2025-11-08T09:16:03.638953Z","steps":["trace[1897971839] 'process raft request'  (duration: 129.825874ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:16:03.771829Z","caller":"traceutil/trace.go:171","msg":"trace[337797147] transaction","detail":"{read_only:false; response_revision:412; number_of_response:1; }","duration":"127.379185ms","start":"2025-11-08T09:16:03.644431Z","end":"2025-11-08T09:16:03.77181Z","steps":["trace[337797147] 'process raft request'  (duration: 127.246687ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:16:19.514834Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"102.146927ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:2196"}
	{"level":"info","ts":"2025-11-08T09:16:19.514932Z","caller":"traceutil/trace.go:171","msg":"trace[776998591] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:452; }","duration":"102.261808ms","start":"2025-11-08T09:16:19.412654Z","end":"2025-11-08T09:16:19.514916Z","steps":["trace[776998591] 'range keys from in-memory index tree'  (duration: 102.041458ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:29 up 58 min,  0 user,  load average: 5.47, 3.91, 2.44
	Linux old-k8s-version-339286 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [2803aa3e5f0a2ceea0fe0cfd1ab12cd66da36d59612ddc51c47c25149406f563] <==
	I1108 09:16:03.431002       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:03.431311       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1108 09:16:03.431488       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:03.431505       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:03.431527       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:03Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:03.680103       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:03.680271       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:03.680304       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:03.680538       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:04.130409       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:04.130447       1 metrics.go:72] Registering metrics
	I1108 09:16:04.130526       1 controller.go:711] "Syncing nftables rules"
	I1108 09:16:13.682708       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:16:13.682787       1 main.go:301] handling current node
	I1108 09:16:23.680038       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:16:23.680091       1 main.go:301] handling current node
	
	
	==> kube-apiserver [54a0f505606805355be0c8c02da3fa85d8f4c8479c571249d92855b5f87dc9c1] <==
	I1108 09:15:45.183411       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 09:15:45.184601       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 09:15:45.185256       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 09:15:45.185301       1 aggregator.go:166] initial CRD sync complete...
	I1108 09:15:45.185313       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 09:15:45.185321       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:15:45.185328       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:15:45.188189       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 09:15:45.188210       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:15:45.219696       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:15:46.089337       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:15:46.093196       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:15:46.093294       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:15:46.582015       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:15:46.620160       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:15:46.693756       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:15:46.699668       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1108 09:15:46.700912       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 09:15:46.707702       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:15:47.165657       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 09:15:48.783868       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 09:15:49.323901       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:15:49.405141       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I1108 09:16:00.680005       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 09:16:00.925972       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8d8bf6138b283e247b09976d004b9dba91390afeeaff6d5235df1d39ac7ed3d8] <==
	I1108 09:16:00.220530       1 event.go:307] "Event occurred" object="old-k8s-version-339286" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node old-k8s-version-339286 event: Registered Node old-k8s-version-339286 in Controller"
	I1108 09:16:00.220584       1 node_lifecycle_controller.go:1029] "Controller detected that all Nodes are not-Ready. Entering master disruption mode"
	I1108 09:16:00.222465       1 shared_informer.go:318] Caches are synced for daemon sets
	I1108 09:16:00.227476       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:16:00.545838       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:16:00.614484       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:16:00.614520       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 09:16:00.685998       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-5dd5756b68 to 2"
	I1108 09:16:00.895413       1 event.go:307] "Event occurred" object="kube-system/coredns" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down replica set coredns-5dd5756b68 to 1 from 2"
	I1108 09:16:00.939094       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-v4l6x"
	I1108 09:16:00.941470       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-6d922"
	I1108 09:16:01.032778       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-dn6fp"
	I1108 09:16:01.044762       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-5dd5756b68-88pvx"
	I1108 09:16:01.074570       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="388.991355ms"
	I1108 09:16:01.289722       1 event.go:307] "Event occurred" object="kube-system/coredns-5dd5756b68" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-5dd5756b68-dn6fp"
	I1108 09:16:01.442444       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="367.815446ms"
	I1108 09:16:01.453885       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="11.379628ms"
	I1108 09:16:01.468683       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="14.731484ms"
	I1108 09:16:01.468821       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.769µs"
	I1108 09:16:13.985525       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="149.967µs"
	I1108 09:16:14.000744       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="114.799µs"
	I1108 09:16:14.600551       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="131.084µs"
	I1108 09:16:15.222774       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I1108 09:16:15.568617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="12.37107ms"
	I1108 09:16:15.569746       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="99.019µs"
	
	
	==> kube-proxy [3a3171f9219b21c6d1f4ce23515586476714e970e62d96fd725816c35135238d] <==
	I1108 09:16:01.668719       1 server_others.go:69] "Using iptables proxy"
	I1108 09:16:01.681171       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1108 09:16:01.701246       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:01.703689       1 server_others.go:152] "Using iptables Proxier"
	I1108 09:16:01.703736       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 09:16:01.703750       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 09:16:01.703786       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 09:16:01.704075       1 server.go:846] "Version info" version="v1.28.0"
	I1108 09:16:01.704095       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:01.704721       1 config.go:97] "Starting endpoint slice config controller"
	I1108 09:16:01.705547       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 09:16:01.705622       1 config.go:315] "Starting node config controller"
	I1108 09:16:01.706188       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 09:16:01.705768       1 config.go:188] "Starting service config controller"
	I1108 09:16:01.706238       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 09:16:01.806826       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 09:16:01.806877       1 shared_informer.go:318] Caches are synced for node config
	I1108 09:16:01.807058       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-scheduler [9726a3a91a2d1e3819efe92c292e40dfcf9eb88437b633bb1424b41cf6542a28] <==
	E1108 09:15:45.170792       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W1108 09:15:45.170509       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1108 09:15:45.170815       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W1108 09:15:45.170838       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 09:15:45.170850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 09:15:45.170864       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W1108 09:15:45.170867       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1108 09:15:45.170886       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1108 09:15:45.170897       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W1108 09:15:45.171071       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1108 09:15:45.171087       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W1108 09:15:45.984021       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1108 09:15:45.984067       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W1108 09:15:46.029099       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1108 09:15:46.029152       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W1108 09:15:46.097807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1108 09:15:46.097847       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W1108 09:15:46.129421       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1108 09:15:46.129452       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W1108 09:15:46.139807       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1108 09:15:46.139850       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W1108 09:15:46.478332       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1108 09:15:46.478455       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I1108 09:15:48.266315       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 09:16:00 old-k8s-version-339286 kubelet[1392]: I1108 09:16:00.092386    1392 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:16:00 old-k8s-version-339286 kubelet[1392]: I1108 09:16:00.093100    1392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:16:00 old-k8s-version-339286 kubelet[1392]: I1108 09:16:00.947094    1392 topology_manager.go:215] "Topology Admit Handler" podUID="f25a3fb9-ffeb-44b3-b462-966272e7b376" podNamespace="kube-system" podName="kindnet-6d922"
	Nov 08 09:16:00 old-k8s-version-339286 kubelet[1392]: I1108 09:16:00.950813    1392 topology_manager.go:215] "Topology Admit Handler" podUID="c75d7f1b-4515-4c79-a0c2-87f23912d198" podNamespace="kube-system" podName="kube-proxy-v4l6x"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116521    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f25a3fb9-ffeb-44b3-b462-966272e7b376-cni-cfg\") pod \"kindnet-6d922\" (UID: \"f25a3fb9-ffeb-44b3-b462-966272e7b376\") " pod="kube-system/kindnet-6d922"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116591    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f25a3fb9-ffeb-44b3-b462-966272e7b376-lib-modules\") pod \"kindnet-6d922\" (UID: \"f25a3fb9-ffeb-44b3-b462-966272e7b376\") " pod="kube-system/kindnet-6d922"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116632    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c75d7f1b-4515-4c79-a0c2-87f23912d198-xtables-lock\") pod \"kube-proxy-v4l6x\" (UID: \"c75d7f1b-4515-4c79-a0c2-87f23912d198\") " pod="kube-system/kube-proxy-v4l6x"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116669    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdc46\" (UniqueName: \"kubernetes.io/projected/c75d7f1b-4515-4c79-a0c2-87f23912d198-kube-api-access-hdc46\") pod \"kube-proxy-v4l6x\" (UID: \"c75d7f1b-4515-4c79-a0c2-87f23912d198\") " pod="kube-system/kube-proxy-v4l6x"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116701    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f25a3fb9-ffeb-44b3-b462-966272e7b376-xtables-lock\") pod \"kindnet-6d922\" (UID: \"f25a3fb9-ffeb-44b3-b462-966272e7b376\") " pod="kube-system/kindnet-6d922"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116726    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c75d7f1b-4515-4c79-a0c2-87f23912d198-lib-modules\") pod \"kube-proxy-v4l6x\" (UID: \"c75d7f1b-4515-4c79-a0c2-87f23912d198\") " pod="kube-system/kube-proxy-v4l6x"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116757    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phzsk\" (UniqueName: \"kubernetes.io/projected/f25a3fb9-ffeb-44b3-b462-966272e7b376-kube-api-access-phzsk\") pod \"kindnet-6d922\" (UID: \"f25a3fb9-ffeb-44b3-b462-966272e7b376\") " pod="kube-system/kindnet-6d922"
	Nov 08 09:16:01 old-k8s-version-339286 kubelet[1392]: I1108 09:16:01.116783    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c75d7f1b-4515-4c79-a0c2-87f23912d198-kube-proxy\") pod \"kube-proxy-v4l6x\" (UID: \"c75d7f1b-4515-4c79-a0c2-87f23912d198\") " pod="kube-system/kube-proxy-v4l6x"
	Nov 08 09:16:03 old-k8s-version-339286 kubelet[1392]: I1108 09:16:03.640645    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-6d922" podStartSLOduration=2.045204874 podCreationTimestamp="2025-11-08 09:16:00 +0000 UTC" firstStartedPulling="2025-11-08 09:16:01.560564387 +0000 UTC m=+13.225991586" lastFinishedPulling="2025-11-08 09:16:03.155943634 +0000 UTC m=+14.821370841" observedRunningTime="2025-11-08 09:16:03.640423845 +0000 UTC m=+15.305851053" watchObservedRunningTime="2025-11-08 09:16:03.640584129 +0000 UTC m=+15.306011341"
	Nov 08 09:16:03 old-k8s-version-339286 kubelet[1392]: I1108 09:16:03.640770    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-v4l6x" podStartSLOduration=3.640751961 podCreationTimestamp="2025-11-08 09:16:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:02.513674057 +0000 UTC m=+14.179101265" watchObservedRunningTime="2025-11-08 09:16:03.640751961 +0000 UTC m=+15.306179168"
	Nov 08 09:16:13 old-k8s-version-339286 kubelet[1392]: I1108 09:16:13.956255    1392 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Nov 08 09:16:13 old-k8s-version-339286 kubelet[1392]: I1108 09:16:13.986090    1392 topology_manager.go:215] "Topology Admit Handler" podUID="f0e8ae90-cdf7-445d-8db5-59f7b2d33911" podNamespace="kube-system" podName="coredns-5dd5756b68-88pvx"
	Nov 08 09:16:13 old-k8s-version-339286 kubelet[1392]: I1108 09:16:13.991168    1392 topology_manager.go:215] "Topology Admit Handler" podUID="47335341-42b0-4e22-9609-1d629e34fc56" podNamespace="kube-system" podName="storage-provisioner"
	Nov 08 09:16:14 old-k8s-version-339286 kubelet[1392]: I1108 09:16:14.011677    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcptk\" (UniqueName: \"kubernetes.io/projected/47335341-42b0-4e22-9609-1d629e34fc56-kube-api-access-jcptk\") pod \"storage-provisioner\" (UID: \"47335341-42b0-4e22-9609-1d629e34fc56\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:14 old-k8s-version-339286 kubelet[1392]: I1108 09:16:14.011746    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-clcxw\" (UniqueName: \"kubernetes.io/projected/f0e8ae90-cdf7-445d-8db5-59f7b2d33911-kube-api-access-clcxw\") pod \"coredns-5dd5756b68-88pvx\" (UID: \"f0e8ae90-cdf7-445d-8db5-59f7b2d33911\") " pod="kube-system/coredns-5dd5756b68-88pvx"
	Nov 08 09:16:14 old-k8s-version-339286 kubelet[1392]: I1108 09:16:14.011786    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/47335341-42b0-4e22-9609-1d629e34fc56-tmp\") pod \"storage-provisioner\" (UID: \"47335341-42b0-4e22-9609-1d629e34fc56\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:14 old-k8s-version-339286 kubelet[1392]: I1108 09:16:14.011817    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f0e8ae90-cdf7-445d-8db5-59f7b2d33911-config-volume\") pod \"coredns-5dd5756b68-88pvx\" (UID: \"f0e8ae90-cdf7-445d-8db5-59f7b2d33911\") " pod="kube-system/coredns-5dd5756b68-88pvx"
	Nov 08 09:16:14 old-k8s-version-339286 kubelet[1392]: I1108 09:16:14.599790    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-88pvx" podStartSLOduration=13.599734939 podCreationTimestamp="2025-11-08 09:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:14.599575745 +0000 UTC m=+26.265002952" watchObservedRunningTime="2025-11-08 09:16:14.599734939 +0000 UTC m=+26.265162146"
	Nov 08 09:16:14 old-k8s-version-339286 kubelet[1392]: I1108 09:16:14.599915    1392 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.599890967 podCreationTimestamp="2025-11-08 09:16:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:14.566272269 +0000 UTC m=+26.231699474" watchObservedRunningTime="2025-11-08 09:16:14.599890967 +0000 UTC m=+26.265318173"
	Nov 08 09:16:17 old-k8s-version-339286 kubelet[1392]: I1108 09:16:17.415426    1392 topology_manager.go:215] "Topology Admit Handler" podUID="8691aea8-c976-4b06-9771-235555a5cebc" podNamespace="default" podName="busybox"
	Nov 08 09:16:17 old-k8s-version-339286 kubelet[1392]: I1108 09:16:17.437375    1392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcrmr\" (UniqueName: \"kubernetes.io/projected/8691aea8-c976-4b06-9771-235555a5cebc-kube-api-access-bcrmr\") pod \"busybox\" (UID: \"8691aea8-c976-4b06-9771-235555a5cebc\") " pod="default/busybox"
	
	
	==> storage-provisioner [32f2f9934d82cf5ec7d0cd94c9f5d64e764ba4ddb81d0df98db4baccd9ad2daa] <==
	I1108 09:16:14.461198       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:16:14.510720       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:16:14.510791       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 09:16:14.551131       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c63ab52-f89e-4357-9f41-9364b79d256c", APIVersion:"v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-339286_83397302-e7d8-416d-932b-1bdabe0ac54a became leader
	I1108 09:16:14.551270       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:16:14.555571       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-339286_83397302-e7d8-416d-932b-1bdabe0ac54a!
	I1108 09:16:14.657625       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-339286_83397302-e7d8-416d-932b-1bdabe0ac54a!
	

-- /stdout --
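For context on the node description in the logs above: the 850m (10%) CPU request total is just the sum of the listed pod requests (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler = 850m), and 850m of the node's 8 CPUs (8000m) is about 10.6%, which kubectl displays as 10%. The 220Mi memory figure is likewise the sum of the coredns (70Mi), etcd (100Mi), and kindnet (50Mi) requests.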
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-339286 -n old-k8s-version-339286
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-339286 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (2.19s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (241.340452ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-271910 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context embed-certs-271910 describe deploy/metrics-server -n kube-system: exit status 1 (61.142378ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-271910 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
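The MK_ADDON_ENABLE_PAUSED exit above comes from minikube's paused-state pre-check, which lists runc containers on the node before touching the addon; with /run/runc missing, that listing fails even though the cluster itself is up. A rough sketch of reproducing the check by hand, assuming the profile name from this run (minikube's exact internal invocation may differ):

	# re-run the failing check that minikube reports in the stderr above
	minikube -p embed-certs-271910 ssh "sudo runc list -f json"
	# expected to fail here with: open /run/runc: no such file or directory
	minikube -p embed-certs-271910 ssh "sudo crictl ps"
	# crictl queries cri-o directly and can confirm the containers are still running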
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-271910
helpers_test.go:243: (dbg) docker inspect embed-certs-271910:

-- stdout --
	[
	    {
	        "Id": "1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb",
	        "Created": "2025-11-08T09:15:51.304431445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 295095,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:15:51.507550087Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/hosts",
	        "LogPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb-json.log",
	        "Name": "/embed-certs-271910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-271910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-271910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb",
	                "LowerDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-271910",
	                "Source": "/var/lib/docker/volumes/embed-certs-271910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-271910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-271910",
	                "name.minikube.sigs.k8s.io": "embed-certs-271910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "9db6f3d817f574ec2044d8fa42af4e7868a0e70828ef0e73ddc1d8c620500161",
	            "SandboxKey": "/var/run/docker/netns/9db6f3d817f5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33099"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33100"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33103"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33101"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33102"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-271910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "be:0b:aa:ce:a7:cf",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea0d0f62e0b24d7b6e90e97450bb9bf7e3ead1e018cb014ae7285578554a529e",
	                    "EndpointID": "16297982f87784ea7fb260640491ceef5d67a7b7b3be58c680afd68f65c051e7",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-271910",
	                        "1bcde2187397"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
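The inspect JSON above is consumed field-by-field later in this log: the helpers read the published "22/tcp" host port (33099 here) back out with a Go template instead of parsing the whole document. A minimal sketch of that lookup, reusing the exact template string the cli_runner lines below pass to docker (an illustration, not minikube's helper code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Indexes .NetworkSettings.Ports["22/tcp"][0].HostPort, which for the
		// container inspected above resolves to "33099".
		out, err := exec.Command("docker", "container", "inspect",
			"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
			"embed-certs-271910").Output()
		if err != nil {
			panic(err)
		}
		fmt.Println(strings.TrimSpace(string(out))) // 33099
	}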
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-271910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-271910 logs -n 25: (1.100429094s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-732849 sudo cat /var/lib/kubelet/config.yaml                                                                                                                   │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo docker system info                                                                                                                                 │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cri-dockerd --version                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo containerd config dump                                                                                                                             │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo crio config                                                                                                                                        │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p bridge-732849                                                                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                          │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:16:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
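	Every entry below follows the glog header layout just described. A hedged sketch of pulling those fields apart with a regular expression (the pattern is inferred from the format string above, not taken from minikube's source):

		package main

		import (
			"fmt"
			"regexp"
		)

		// [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
		var glogLine = regexp.MustCompile(
			`^([IWEF])(\d{4}) (\d{2}:\d{2}:\d{2}\.\d{6})\s+(\d+) ([\w./-]+:\d+)\] (.*)$`)

		func main() {
			m := glogLine.FindStringSubmatch(
				"I1108 09:16:14.619702  302884 out.go:360] Setting OutFile to fd 1 ...")
			fmt.Printf("severity=%s date=%s time=%s pid=%s src=%s msg=%q\n",
				m[1], m[2], m[3], m[4], m[5], m[6])
		}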
	I1108 09:16:14.619702  302884 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:14.620015  302884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:14.620022  302884 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:14.620029  302884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:14.620497  302884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:16:14.621237  302884 out.go:368] Setting JSON to false
	I1108 09:16:14.623593  302884 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3526,"bootTime":1762589849,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:16:14.624451  302884 start.go:143] virtualization: kvm guest
	I1108 09:16:14.626457  302884 out.go:179] * [default-k8s-diff-port-677902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:16:14.629520  302884 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:16:14.629524  302884 notify.go:221] Checking for updates...
	I1108 09:16:14.631258  302884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:16:14.632595  302884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:16:14.634002  302884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:16:14.635485  302884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:16:14.636691  302884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:16:14.638679  302884 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:14.638844  302884 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:14.638954  302884 config.go:182] Loaded profile config "old-k8s-version-339286": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:16:14.639063  302884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:16:14.691152  302884 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:16:14.691332  302884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:14.813570  302884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:16:14.796199727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:14.813892  302884 docker.go:319] overlay module found
	I1108 09:16:14.816970  302884 out.go:179] * Using the docker driver based on user configuration
	I1108 09:16:14.818292  302884 start.go:309] selected driver: docker
	I1108 09:16:14.818348  302884 start.go:930] validating driver "docker" against <nil>
	I1108 09:16:14.818374  302884 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:16:14.819199  302884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:14.933520  302884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:16:14.916255188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:14.933793  302884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:16:14.934044  302884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:14.938665  302884 out.go:179] * Using Docker driver with root privileges
	I1108 09:16:14.940021  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:14.940170  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:14.940249  302884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:16:14.940569  302884 start.go:353] cluster config:
	{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:16:14.943927  302884 out.go:179] * Starting "default-k8s-diff-port-677902" primary control-plane node in "default-k8s-diff-port-677902" cluster
	I1108 09:16:14.945738  302884 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:16:14.946990  302884 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:16:14.067805  294020 cli_runner.go:164] Run: docker container inspect embed-certs-271910 --format={{.State.Status}}
	I1108 09:16:14.074060  294020 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.074085  294020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:16:14.074146  294020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:14.103402  294020 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.103432  294020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:16:14.103506  294020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:14.108099  294020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:16:14.132496  294020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:16:14.147070  294020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:16:14.201882  294020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:14.237829  294020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.253009  294020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.432416  294020 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 09:16:14.437859  294020 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271910" to be "Ready" ...
	I1108 09:16:14.957896  294020 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-271910" context rescaled to 1 replicas
	I1108 09:16:14.969443  294020 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
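	The kubectl/sed pipeline logged at 09:16:14.147070 above rewrites the CoreDNS Corefile before replace -f re-applies the ConfigMap: one sed expression inserts a hosts stanza (mapping host.minikube.internal to the network gateway) ahead of the forward directive, the other inserts a log directive ahead of errors. Reconstructed from those sed expressions (not captured from the cluster), the touched part of the Corefile should come out roughly as:

	        log
	        errors
	        ...
	        hosts {
	           192.168.85.1 host.minikube.internal
	           fallthrough
	        }
	        forward . /etc/resolv.conf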
	I1108 09:16:14.948456  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:14.948520  302884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:16:14.948532  302884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:16:14.948688  302884 cache.go:59] Caching tarball of preloaded images
	I1108 09:16:14.949020  302884 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:16:14.949079  302884 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:16:14.949215  302884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:16:14.949344  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json: {Name:mk5bfc4db394c708a6042a234b18539bd8dad38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:14.984638  302884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:16:14.984672  302884 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:16:14.984705  302884 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:16:14.984748  302884 start.go:360] acquireMachinesLock for default-k8s-diff-port-677902: {Name:mk526669374d724485de61415f0aa79950bc7fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:14.984878  302884 start.go:364] duration metric: took 108.394µs to acquireMachinesLock for "default-k8s-diff-port-677902"
	I1108 09:16:14.984915  302884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:16:14.985006  302884 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:16:10.370669  285556 node_ready.go:57] node "old-k8s-version-339286" has "Ready":"False" status (will retry)
	W1108 09:16:12.868173  285556 node_ready.go:57] node "old-k8s-version-339286" has "Ready":"False" status (will retry)
	I1108 09:16:14.398457  285556 node_ready.go:49] node "old-k8s-version-339286" is "Ready"
	I1108 09:16:14.398745  285556 node_ready.go:38] duration metric: took 13.534293684s for node "old-k8s-version-339286" to be "Ready" ...
	I1108 09:16:14.398779  285556 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:14.398863  285556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:14.426992  285556 api_server.go:72] duration metric: took 14.046193072s to wait for apiserver process to appear ...
	I1108 09:16:14.427020  285556 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:14.427040  285556 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:16:14.457535  285556 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:16:14.460756  285556 api_server.go:141] control plane version: v1.28.0
	I1108 09:16:14.460783  285556 api_server.go:131] duration metric: took 33.754556ms to wait for apiserver health ...
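	The health check logged above is a plain HTTPS GET against /healthz that treats a 200 response with body "ok" as healthy. A self-contained sketch of that probe (minikube's real client wires in the cluster CA; the InsecureSkipVerify here is purely illustrative):

		package main

		import (
			"crypto/tls"
			"fmt"
			"io"
			"net/http"
		)

		func main() {
			c := &http.Client{Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // demo only
			}}
			resp, err := c.Get("https://192.168.103.2:8443/healthz")
			if err != nil {
				panic(err)
			}
			defer resp.Body.Close()
			body, _ := io.ReadAll(resp.Body)
			fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
		}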
	I1108 09:16:14.460796  285556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:14.468460  285556 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:14.468503  285556 system_pods.go:61] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.468511  285556 system_pods.go:61] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.468519  285556 system_pods.go:61] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.468524  285556 system_pods.go:61] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.468530  285556 system_pods.go:61] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.468534  285556 system_pods.go:61] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.468539  285556 system_pods.go:61] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.468545  285556 system_pods.go:61] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:14.468553  285556 system_pods.go:74] duration metric: took 7.750133ms to wait for pod list to return data ...
	I1108 09:16:14.468563  285556 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:14.473761  285556 default_sa.go:45] found service account: "default"
	I1108 09:16:14.473786  285556 default_sa.go:55] duration metric: took 5.215828ms for default service account to be created ...
	I1108 09:16:14.473811  285556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:14.485871  285556 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:14.485923  285556 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.485932  285556 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.485941  285556 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.485953  285556 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.485970  285556 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.485975  285556 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.485991  285556 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.485998  285556 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:14.486054  285556 retry.go:31] will retry after 246.902773ms: missing components: kube-dns
	I1108 09:16:14.744570  285556 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:14.744609  285556 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.744618  285556 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.744627  285556 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.744637  285556 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.744643  285556 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.744648  285556 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.744653  285556 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.744658  285556 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Running
	I1108 09:16:14.744667  285556 system_pods.go:126] duration metric: took 270.849268ms to wait for k8s-apps to be running ...
	I1108 09:16:14.744677  285556 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:14.744731  285556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:14.769258  285556 system_svc.go:56] duration metric: took 24.56978ms WaitForService to wait for kubelet
	I1108 09:16:14.769309  285556 kubeadm.go:587] duration metric: took 14.388514306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:14.769556  285556 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:14.774712  285556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:14.774739  285556 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:14.774812  285556 node_conditions.go:105] duration metric: took 5.192043ms to run NodePressure ...
	I1108 09:16:14.774830  285556 start.go:242] waiting for startup goroutines ...
	I1108 09:16:14.774881  285556 start.go:247] waiting for cluster config update ...
	I1108 09:16:14.774895  285556 start.go:256] writing updated cluster config ...
	I1108 09:16:14.775329  285556 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:14.780932  285556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:14.790003  285556 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:14.428477  288696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.428494  288696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:16:14.428555  288696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:14.459240  288696 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.459267  288696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:16:14.459355  288696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:14.477655  288696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:16:14.497326  288696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:16:14.636260  288696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:16:14.677268  288696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:14.695739  288696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.805038  288696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:15.046647  288696 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1108 09:16:15.048786  288696 node_ready.go:35] waiting up to 6m0s for node "no-preload-220714" to be "Ready" ...
	I1108 09:16:15.350945  288696 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:16:15.801076  285556 pod_ready.go:94] pod "coredns-5dd5756b68-88pvx" is "Ready"
	I1108 09:16:15.801161  285556 pod_ready.go:86] duration metric: took 1.011063973s for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.805636  285556 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.811600  285556 pod_ready.go:94] pod "etcd-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.811650  285556 pod_ready.go:86] duration metric: took 5.984998ms for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.816583  285556 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.823575  285556 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.823606  285556 pod_ready.go:86] duration metric: took 6.946404ms for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.827507  285556 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.995157  285556 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.995188  285556 pod_ready.go:86] duration metric: took 167.654484ms for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.194993  285556 pod_ready.go:83] waiting for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.594916  285556 pod_ready.go:94] pod "kube-proxy-v4l6x" is "Ready"
	I1108 09:16:16.594953  285556 pod_ready.go:86] duration metric: took 399.929202ms for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.795274  285556 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:17.194081  285556 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-339286" is "Ready"
	I1108 09:16:17.194107  285556 pod_ready.go:86] duration metric: took 398.769764ms for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:17.194123  285556 pod_ready.go:40] duration metric: took 2.41311476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:17.240446  285556 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:16:17.242415  285556 out.go:203] 
	W1108 09:16:17.243926  285556 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:16:17.248943  285556 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:16:17.250772  285556 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-339286" cluster and "default" namespace by default
	I1108 09:16:15.355429  288696 addons.go:515] duration metric: took 994.950876ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:16:15.554093  288696 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-220714" context rescaled to 1 replicas
	W1108 09:16:17.051497  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:14.970722  294020 addons.go:515] duration metric: took 934.784036ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:16:16.442258  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:14.988644  302884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:16:14.988941  302884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:16:14.988979  302884 client.go:173] LocalClient.Create starting
	I1108 09:16:14.989121  302884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 09:16:14.989164  302884 main.go:143] libmachine: Decoding PEM data...
	I1108 09:16:14.989194  302884 main.go:143] libmachine: Parsing certificate...
	I1108 09:16:14.989303  302884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 09:16:14.989337  302884 main.go:143] libmachine: Decoding PEM data...
	I1108 09:16:14.989349  302884 main.go:143] libmachine: Parsing certificate...
	I1108 09:16:14.989787  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:16:15.020585  302884 cli_runner.go:211] docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:16:15.020664  302884 network_create.go:284] running [docker network inspect default-k8s-diff-port-677902] to gather additional debugging logs...
	I1108 09:16:15.020681  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902
	W1108 09:16:15.047609  302884 cli_runner.go:211] docker network inspect default-k8s-diff-port-677902 returned with exit code 1
	I1108 09:16:15.047686  302884 network_create.go:287] error running [docker network inspect default-k8s-diff-port-677902]: docker network inspect default-k8s-diff-port-677902: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-677902 not found
	I1108 09:16:15.047745  302884 network_create.go:289] output of [docker network inspect default-k8s-diff-port-677902]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-677902 not found
	
	** /stderr **
	I1108 09:16:15.048043  302884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:16:15.076013  302884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
	I1108 09:16:15.076913  302884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-69402498439f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:64:3c:58:48:b9} reservation:<nil>}
	I1108 09:16:15.077960  302884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11dfd15cc420 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:1d:c0:7a:ca:31} reservation:<nil>}
	I1108 09:16:15.079133  302884 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec8b10}
	I1108 09:16:15.079166  302884 network_create.go:124] attempt to create docker network default-k8s-diff-port-677902 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:16:15.079219  302884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 default-k8s-diff-port-677902
	I1108 09:16:15.171652  302884 network_create.go:108] docker network default-k8s-diff-port-677902 192.168.76.0/24 created
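	The network.go lines above show how the free subnet was chosen: 192.168.49.0/24, 192.168.58.0/24, and 192.168.67.0/24 are already claimed by existing bridges, so the walk lands on 192.168.76.0/24. A hedged sketch of that walk (the step of 9 between candidates is inferred from this log, and the taken set is hard-coded here for illustration):

		package main

		import "fmt"

		func main() {
			// Subnets already backing docker bridges, per the log above.
			taken := map[string]bool{
				"192.168.49.0/24": true, // br-b3f2c64ee2dd
				"192.168.58.0/24": true, // br-69402498439f
				"192.168.67.0/24": true, // br-11dfd15cc420
			}
			for third := 49; third <= 247; third += 9 {
				subnet := fmt.Sprintf("192.168.%d.0/24", third)
				if taken[subnet] {
					fmt.Println("skipping subnet", subnet, "that is taken")
					continue
				}
				fmt.Println("using free private subnet", subnet) // 192.168.76.0/24
				return
			}
		}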
	I1108 09:16:15.171687  302884 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-677902" container
	I1108 09:16:15.171753  302884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:16:15.199943  302884 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-677902 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:16:15.225618  302884 oci.go:103] Successfully created a docker volume default-k8s-diff-port-677902
	I1108 09:16:15.225772  302884 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-677902-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --entrypoint /usr/bin/test -v default-k8s-diff-port-677902:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:16:15.866328  302884 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-677902
	I1108 09:16:15.866376  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:15.866401  302884 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:16:15.866471  302884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-677902:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 09:16:19.052301  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	W1108 09:16:21.552514  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:20.584332  302884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-677902:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.717760526s)
	I1108 09:16:20.584367  302884 kic.go:203] duration metric: took 4.717962939s to extract preloaded images to volume ...
	W1108 09:16:20.584469  302884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:16:20.584509  302884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:16:20.584562  302884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:16:20.649658  302884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-677902 --name default-k8s-diff-port-677902 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --network default-k8s-diff-port-677902 --ip 192.168.76.2 --volume default-k8s-diff-port-677902:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:16:20.985463  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Running}}
	I1108 09:16:21.005078  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.023858  302884 cli_runner.go:164] Run: docker exec default-k8s-diff-port-677902 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:16:21.072397  302884 oci.go:144] the created container "default-k8s-diff-port-677902" has a running status.
	I1108 09:16:21.072432  302884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa...
	I1108 09:16:21.328004  302884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:16:21.358901  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.381864  302884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:16:21.381926  302884 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-677902 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:16:21.429674  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.450173  302884 machine.go:94] provisionDockerMachine start ...
	I1108 09:16:21.450256  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.471253  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.471544  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.471559  302884 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:16:21.604466  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:16:21.604500  302884 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-677902"
	I1108 09:16:21.604558  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.625801  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.626035  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.626052  302884 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-677902 && echo "default-k8s-diff-port-677902" | sudo tee /etc/hostname
	I1108 09:16:21.767180  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:16:21.767256  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.786052  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.786341  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.786363  302884 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-677902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-677902/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-677902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:16:21.917181  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
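
Note: the script above is idempotent: it only edits /etc/hosts when no entry for the new hostname exists, rewriting a stale 127.0.1.1 line in place if one is present. A quick sanity check from inside the guest (sketch):

	hostname                      # should print default-k8s-diff-port-677902
	grep '^127.0.1.1' /etc/hosts  # should show the rewritten alias line
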
	I1108 09:16:21.917219  302884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:16:21.917239  302884 ubuntu.go:190] setting up certificates
	I1108 09:16:21.917247  302884 provision.go:84] configureAuth start
	I1108 09:16:21.917317  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:21.935307  302884 provision.go:143] copyHostCerts
	I1108 09:16:21.935370  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:16:21.935382  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:16:21.935449  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:16:21.935553  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:16:21.935562  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:16:21.935591  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:16:21.935701  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:16:21.935713  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:16:21.935739  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:16:21.935803  302884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-677902 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-677902 localhost minikube]
	I1108 09:16:22.042345  302884 provision.go:177] copyRemoteCerts
	I1108 09:16:22.042398  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:16:22.042450  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.062501  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.156803  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:16:22.176432  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:16:22.194210  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:16:22.212199  302884 provision.go:87] duration metric: took 294.93803ms to configureAuth
	I1108 09:16:22.212230  302884 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:16:22.212437  302884 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:22.212551  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.231181  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:22.231443  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:22.231463  302884 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:16:22.470271  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:16:22.470308  302884 machine.go:97] duration metric: took 1.020112912s to provisionDockerMachine
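
Note: the restart above makes CRI-O pick up /etc/sysconfig/crio.minikube, which injects the service CIDR as an insecure registry. A sketch of how to confirm the drop-in landed:

	cat /etc/sysconfig/crio.minikube
	# CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	sudo systemctl is-active crio   # should report "active" after the restart
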
	I1108 09:16:22.470320  302884 client.go:176] duration metric: took 7.481335007s to LocalClient.Create
	I1108 09:16:22.470341  302884 start.go:167] duration metric: took 7.481404005s to libmachine.API.Create "default-k8s-diff-port-677902"
	I1108 09:16:22.470350  302884 start.go:293] postStartSetup for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:16:22.470362  302884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:16:22.470433  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:16:22.470471  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.490818  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.586821  302884 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:16:22.590810  302884 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:16:22.590839  302884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:16:22.590852  302884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:16:22.591149  302884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:16:22.591343  302884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:16:22.591507  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:16:22.600330  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:16:22.620675  302884 start.go:296] duration metric: took 150.312864ms for postStartSetup
	I1108 09:16:22.621005  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:22.638917  302884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:16:22.639195  302884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:16:22.639233  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.658713  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.750655  302884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:16:22.755273  302884 start.go:128] duration metric: took 7.770253809s to createHost
	I1108 09:16:22.755312  302884 start.go:83] releasing machines lock for "default-k8s-diff-port-677902", held for 7.770414218s
	I1108 09:16:22.755394  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:22.773899  302884 ssh_runner.go:195] Run: cat /version.json
	I1108 09:16:22.773917  302884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:16:22.773948  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.773974  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.794752  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.795127  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.889663  302884 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:22.942216  302884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:16:22.977581  302884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:16:22.982348  302884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:16:22.982411  302884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:16:23.008837  302884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:16:23.008860  302884 start.go:496] detecting cgroup driver to use...
	I1108 09:16:23.008896  302884 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:16:23.008949  302884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:16:23.025177  302884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:16:23.037624  302884 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:16:23.037681  302884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:16:23.054660  302884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:16:23.073210  302884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:16:23.155568  302884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:16:23.244179  302884 docker.go:234] disabling docker service ...
	I1108 09:16:23.244249  302884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:16:23.263226  302884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:16:23.276679  302884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:16:23.369719  302884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:16:23.452958  302884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
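
Note: stop/disable/mask is the standard three-step pattern for keeping a competing runtime down: stop ends the running unit, disable removes boot-time activation, and mask links the unit to /dev/null so that even socket activation cannot bring it back. The sequence above, condensed into a sketch:

	for unit in cri-docker.socket cri-docker.service docker.socket docker.service; do
	  sudo systemctl stop -f "$unit" || true   # unit may not exist or be running
	done
	sudo systemctl disable cri-docker.socket docker.socket
	sudo systemctl mask cri-docker.service docker.service
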
	I1108 09:16:23.465534  302884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:16:23.480351  302884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:16:23.480429  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.490576  302884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:16:23.490636  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.499772  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.508365  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.517456  302884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:16:23.525954  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.535277  302884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.549170  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.558258  302884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:16:23.565676  302884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:16:23.573369  302884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:16:23.653541  302884 ssh_runner.go:195] Run: sudo systemctl restart crio
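
Note: after the sed edits above, the touched keys in /etc/crio/crio.conf.d/02-crio.conf end up roughly as follows. This is an approximation assembled from the commands (section headers assumed from CRI-O's stock layout), not a dump of the actual file:

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
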
	I1108 09:16:23.767673  302884 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:16:23.767729  302884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:16:23.771780  302884 start.go:564] Will wait 60s for crictl version
	I1108 09:16:23.771829  302884 ssh_runner.go:195] Run: which crictl
	I1108 09:16:23.775330  302884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:16:23.799928  302884 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:16:23.800010  302884 ssh_runner.go:195] Run: crio --version
	I1108 09:16:23.827743  302884 ssh_runner.go:195] Run: crio --version
	I1108 09:16:23.857164  302884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:16:18.941803  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	W1108 09:16:20.942622  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	W1108 09:16:23.441685  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:23.858390  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:16:23.875734  302884 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:16:23.879850  302884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:16:23.890489  302884 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:16:23.890611  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:23.890671  302884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:16:23.922889  302884 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:16:23.922910  302884 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:16:23.922950  302884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:16:23.948186  302884 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:16:23.948207  302884 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:16:23.948214  302884 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1108 09:16:23.948333  302884 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-677902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:16:23.948416  302884 ssh_runner.go:195] Run: crio config
	I1108 09:16:23.994577  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:23.994603  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:23.994707  302884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:16:23.994758  302884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-677902 NodeName:default-k8s-diff-port-677902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:16:23.994909  302884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-677902"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
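
Note: a generated config like the one above can be validated without mutating the node by handing it to kubeadm's dry-run mode (sketch; uses the path the scp step below writes to):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
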
	
	I1108 09:16:23.994977  302884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:16:24.003550  302884 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:16:24.003613  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:16:24.011668  302884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:16:24.025570  302884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:16:24.040656  302884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:16:24.053685  302884 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:16:24.057813  302884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:16:24.068090  302884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:16:24.153388  302884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:24.180756  302884 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902 for IP: 192.168.76.2
	I1108 09:16:24.180778  302884 certs.go:195] generating shared ca certs ...
	I1108 09:16:24.180792  302884 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.180962  302884 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:16:24.181003  302884 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:16:24.181013  302884 certs.go:257] generating profile certs ...
	I1108 09:16:24.181084  302884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key
	I1108 09:16:24.181110  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt with IP's: []
	I1108 09:16:24.249417  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt ...
	I1108 09:16:24.249443  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt: {Name:mkb0424a7b2244acd4c9b08e8fd3832ca89c8cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.249643  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key ...
	I1108 09:16:24.249660  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key: {Name:mk98228a5537d26558a0a8aa80142320b934942d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.249773  302884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273
	I1108 09:16:24.249793  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:16:24.369815  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 ...
	I1108 09:16:24.369843  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273: {Name:mkfff96a8818db7317888f2704b4dce1877844fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.370020  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273 ...
	I1108 09:16:24.370036  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273: {Name:mkd7e2641bb265c1b14bb815272c25677391281b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.370138  302884 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt
	I1108 09:16:24.370218  302884 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key
	I1108 09:16:24.370275  302884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key
	I1108 09:16:24.370302  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt with IP's: []
	I1108 09:16:24.474350  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt ...
	I1108 09:16:24.474381  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt: {Name:mk129990eb5be69a3128d0b5b94ee200eae7c775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.474565  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key ...
	I1108 09:16:24.474588  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key: {Name:mk588b95436fa4f4c5adaa76c8236e776fdef198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.474803  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:16:24.474841  302884 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:16:24.474852  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:16:24.474873  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:16:24.474894  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:16:24.474915  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:16:24.474951  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:16:24.475489  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:16:24.494518  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:16:24.512401  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:16:24.530678  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:16:24.548124  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:16:24.566472  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:16:24.584982  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:16:24.603982  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
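
Note: the apiserver certificate generated above should carry exactly the SANs listed at the crypto.go:68 line (10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2). One way to confirm that on the written file (sketch):

	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'
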
	W1108 09:16:24.051828  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	W1108 09:16:26.552224  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:27.551990  288696 node_ready.go:49] node "no-preload-220714" is "Ready"
	I1108 09:16:27.552021  288696 node_ready.go:38] duration metric: took 12.503203095s for node "no-preload-220714" to be "Ready" ...
	I1108 09:16:27.552043  288696 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:27.552094  288696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:27.567072  288696 api_server.go:72] duration metric: took 13.20624104s to wait for apiserver process to appear ...
	I1108 09:16:27.567097  288696 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:27.567115  288696 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1108 09:16:27.571234  288696 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
	I1108 09:16:27.572225  288696 api_server.go:141] control plane version: v1.34.1
	I1108 09:16:27.572252  288696 api_server.go:131] duration metric: took 5.147393ms to wait for apiserver health ...
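
Note: the healthz probe above is a plain HTTPS GET against the apiserver. It can be reproduced with curl from the host (sketch; -k skips verification because the cluster CA is not in the host trust store):

	curl -k https://192.168.94.2:8443/healthz
	# ok
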
	I1108 09:16:27.572262  288696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:27.575571  288696 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:27.575606  288696 system_pods.go:61] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.575613  288696 system_pods.go:61] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.575621  288696 system_pods.go:61] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.575627  288696 system_pods.go:61] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.575636  288696 system_pods.go:61] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.575642  288696 system_pods.go:61] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.575649  288696 system_pods.go:61] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.575656  288696 system_pods.go:61] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.575667  288696 system_pods.go:74] duration metric: took 3.395544ms to wait for pod list to return data ...
	I1108 09:16:27.575676  288696 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:27.578421  288696 default_sa.go:45] found service account: "default"
	I1108 09:16:27.578442  288696 default_sa.go:55] duration metric: took 2.756827ms for default service account to be created ...
	I1108 09:16:27.578453  288696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:27.581851  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:27.581882  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.581890  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.581898  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.581904  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.581909  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.581914  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.581918  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.581925  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.582377  288696 retry.go:31] will retry after 309.619866ms: missing components: kube-dns
	I1108 09:16:27.897123  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:27.897166  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.897176  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.897183  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.897189  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.897196  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.897201  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.897206  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.897213  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.897230  288696 retry.go:31] will retry after 292.226039ms: missing components: kube-dns
	W1108 09:16:25.442185  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:26.441536  294020 node_ready.go:49] node "embed-certs-271910" is "Ready"
	I1108 09:16:26.441573  294020 node_ready.go:38] duration metric: took 12.003041862s for node "embed-certs-271910" to be "Ready" ...
	I1108 09:16:26.441586  294020 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:26.441646  294020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:26.454331  294020 api_server.go:72] duration metric: took 12.418379921s to wait for apiserver process to appear ...
	I1108 09:16:26.454357  294020 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:26.454382  294020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:16:26.458665  294020 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 09:16:26.459882  294020 api_server.go:141] control plane version: v1.34.1
	I1108 09:16:26.459909  294020 api_server.go:131] duration metric: took 5.544789ms to wait for apiserver health ...
	I1108 09:16:26.459925  294020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:26.463219  294020 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:26.463256  294020 system_pods.go:61] "coredns-66bc5c9577-cbw4j" [b1a3271b-2b58-460a-98e7-29636a0c2860] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:26.463263  294020 system_pods.go:61] "etcd-embed-certs-271910" [5ce2f3f4-0806-4e34-a0fc-82eb8ddedc8f] Running
	I1108 09:16:26.463270  294020 system_pods.go:61] "kindnet-49l78" [bb346bcf-44a7-4255-a33c-fdb05b6193f2] Running
	I1108 09:16:26.463276  294020 system_pods.go:61] "kube-apiserver-embed-certs-271910" [ed4f4bb9-d9c7-4258-b20d-8f6d8a3c2efa] Running
	I1108 09:16:26.463300  294020 system_pods.go:61] "kube-controller-manager-embed-certs-271910" [7f2587b6-bd76-413d-966a-01f8dc17858f] Running
	I1108 09:16:26.463306  294020 system_pods.go:61] "kube-proxy-lwbl6" [8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c] Running
	I1108 09:16:26.463315  294020 system_pods.go:61] "kube-scheduler-embed-certs-271910" [026e9843-832c-4e8e-8a26-831b5eaede98] Running
	I1108 09:16:26.463320  294020 system_pods.go:61] "storage-provisioner" [69b5b176-edf7-4eda-82be-7e9980c13459] Running
	I1108 09:16:26.463326  294020 system_pods.go:74] duration metric: took 3.393092ms to wait for pod list to return data ...
	I1108 09:16:26.463335  294020 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:26.465623  294020 default_sa.go:45] found service account: "default"
	I1108 09:16:26.465643  294020 default_sa.go:55] duration metric: took 2.299772ms for default service account to be created ...
	I1108 09:16:26.465652  294020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:26.468371  294020 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:26.468405  294020 system_pods.go:89] "coredns-66bc5c9577-cbw4j" [b1a3271b-2b58-460a-98e7-29636a0c2860] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:26.468415  294020 system_pods.go:89] "etcd-embed-certs-271910" [5ce2f3f4-0806-4e34-a0fc-82eb8ddedc8f] Running
	I1108 09:16:26.468422  294020 system_pods.go:89] "kindnet-49l78" [bb346bcf-44a7-4255-a33c-fdb05b6193f2] Running
	I1108 09:16:26.468428  294020 system_pods.go:89] "kube-apiserver-embed-certs-271910" [ed4f4bb9-d9c7-4258-b20d-8f6d8a3c2efa] Running
	I1108 09:16:26.468434  294020 system_pods.go:89] "kube-controller-manager-embed-certs-271910" [7f2587b6-bd76-413d-966a-01f8dc17858f] Running
	I1108 09:16:26.468440  294020 system_pods.go:89] "kube-proxy-lwbl6" [8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c] Running
	I1108 09:16:26.468446  294020 system_pods.go:89] "kube-scheduler-embed-certs-271910" [026e9843-832c-4e8e-8a26-831b5eaede98] Running
	I1108 09:16:26.468454  294020 system_pods.go:89] "storage-provisioner" [69b5b176-edf7-4eda-82be-7e9980c13459] Running
	I1108 09:16:26.468463  294020 system_pods.go:126] duration metric: took 2.804388ms to wait for k8s-apps to be running ...
	I1108 09:16:26.468475  294020 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:26.468534  294020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:26.482166  294020 system_svc.go:56] duration metric: took 13.682703ms WaitForService to wait for kubelet
	I1108 09:16:26.482193  294020 kubeadm.go:587] duration metric: took 12.446246908s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:26.482214  294020 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:26.485327  294020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:26.485356  294020 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:26.485372  294020 node_conditions.go:105] duration metric: took 3.153381ms to run NodePressure ...
	I1108 09:16:26.485386  294020 start.go:242] waiting for startup goroutines ...
	I1108 09:16:26.485396  294020 start.go:247] waiting for cluster config update ...
	I1108 09:16:26.485411  294020 start.go:256] writing updated cluster config ...
	I1108 09:16:26.485699  294020 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:26.489800  294020 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:26.493546  294020 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.499143  294020 pod_ready.go:94] pod "coredns-66bc5c9577-cbw4j" is "Ready"
	I1108 09:16:27.499173  294020 pod_ready.go:86] duration metric: took 1.005603354s for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.501546  294020 pod_ready.go:83] waiting for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.507048  294020 pod_ready.go:94] pod "etcd-embed-certs-271910" is "Ready"
	I1108 09:16:27.507073  294020 pod_ready.go:86] duration metric: took 5.504922ms for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.509054  294020 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.512694  294020 pod_ready.go:94] pod "kube-apiserver-embed-certs-271910" is "Ready"
	I1108 09:16:27.512715  294020 pod_ready.go:86] duration metric: took 3.646ms for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.514487  294020 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.697453  294020 pod_ready.go:94] pod "kube-controller-manager-embed-certs-271910" is "Ready"
	I1108 09:16:27.697476  294020 pod_ready.go:86] duration metric: took 182.972054ms for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.898149  294020 pod_ready.go:83] waiting for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.297629  294020 pod_ready.go:94] pod "kube-proxy-lwbl6" is "Ready"
	I1108 09:16:28.297663  294020 pod_ready.go:86] duration metric: took 399.483472ms for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.497998  294020 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.897338  294020 pod_ready.go:94] pod "kube-scheduler-embed-certs-271910" is "Ready"
	I1108 09:16:28.897364  294020 pod_ready.go:86] duration metric: took 399.337987ms for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.897376  294020 pod_ready.go:40] duration metric: took 2.407548053s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:28.950786  294020 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:16:28.952604  294020 out.go:179] * Done! kubectl is now configured to use "embed-certs-271910" cluster and "default" namespace by default
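
Note: the pod_ready loop above is roughly what kubectl wait does per control-plane label. An equivalent manual check once the context exists (sketch; the context name matches the profile, per the "Done!" line):

	kubectl --context embed-certs-271910 -n kube-system \
	  wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=4m
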
	I1108 09:16:24.622161  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:16:24.642050  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:16:24.660239  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:16:24.678050  302884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:16:24.691686  302884 ssh_runner.go:195] Run: openssl version
	I1108 09:16:24.697945  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:16:24.707064  302884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:16:24.711018  302884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:16:24.711107  302884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:16:24.746715  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:16:24.755710  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:16:24.764114  302884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:16:24.767998  302884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:16:24.768047  302884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:16:24.802977  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:16:24.811920  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:16:24.820490  302884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:16:24.824538  302884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:16:24.824586  302884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:16:24.859077  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
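
Note: the test/link commands above implement OpenSSL's hashed-directory lookup: every CA in /etc/ssl/certs must be reachable through a symlink named <subject-hash>.0, where the hash comes from openssl x509 -hash. Condensed sketch of the same step for one cert:

	hash=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${hash}.0"
	# For minikubeCA the hash is b5213941, matching the b5213941.0 link above.
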
	I1108 09:16:24.868630  302884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:16:24.872519  302884 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:16:24.872569  302884 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:16:24.872624  302884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:24.872677  302884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:24.900788  302884 cri.go:89] found id: ""
	I1108 09:16:24.900863  302884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:16:24.909357  302884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:16:24.917330  302884 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:16:24.917379  302884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:16:24.925073  302884 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:16:24.925089  302884 kubeadm.go:158] found existing configuration files:
	
	I1108 09:16:24.925129  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1108 09:16:24.933049  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:16:24.933102  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:16:24.940684  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1108 09:16:24.948512  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:16:24.948569  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:16:24.955672  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1108 09:16:24.963146  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:16:24.963196  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:16:24.970559  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1108 09:16:24.978321  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:16:24.978370  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
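The four grep/rm pairs above are minikube's stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint, and removed otherwise. A compact shell equivalent (illustrative only; the loop is not minikube's actual code, but the paths and URL are taken from the log):

  for f in admin kubelet controller-manager scheduler; do
    # keep the file only if it points at the expected endpoint
    sudo grep -q "https://control-plane.minikube.internal:8444" "/etc/kubernetes/${f}.conf" \
      || sudo rm -f "/etc/kubernetes/${f}.conf"
  done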
	I1108 09:16:24.985648  302884 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:16:25.048029  302884 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:16:25.112944  302884 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:16:28.193963  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:28.194002  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:28.194010  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:28.194016  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:28.194020  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:28.194024  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:28.194027  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:28.194029  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:28.194034  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:28.194082  288696 retry.go:31] will retry after 382.783963ms: missing components: kube-dns
	I1108 09:16:28.581516  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:28.581565  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:28.581575  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:28.581583  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:28.581589  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:28.581595  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:28.581600  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:28.581605  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:28.581620  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:28.581636  288696 retry.go:31] will retry after 411.561067ms: missing components: kube-dns
	I1108 09:16:28.997583  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:28.997612  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Running
	I1108 09:16:28.997617  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:28.997621  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:28.997624  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:28.997628  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:28.997631  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:28.997634  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:28.997637  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Running
	I1108 09:16:28.997643  288696 system_pods.go:126] duration metric: took 1.419185057s to wait for k8s-apps to be running ...
	I1108 09:16:28.997650  288696 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:28.997696  288696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:29.013585  288696 system_svc.go:56] duration metric: took 15.92533ms WaitForService to wait for kubelet
	I1108 09:16:29.013619  288696 kubeadm.go:587] duration metric: took 14.652790412s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:29.013642  288696 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:29.016750  288696 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:29.016779  288696 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:29.016795  288696 node_conditions.go:105] duration metric: took 3.145779ms to run NodePressure ...
	I1108 09:16:29.016808  288696 start.go:242] waiting for startup goroutines ...
	I1108 09:16:29.016819  288696 start.go:247] waiting for cluster config update ...
	I1108 09:16:29.016856  288696 start.go:256] writing updated cluster config ...
	I1108 09:16:29.017134  288696 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:29.023264  288696 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:29.027422  288696 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.032160  288696 pod_ready.go:94] pod "coredns-66bc5c9577-zdb97" is "Ready"
	I1108 09:16:29.032183  288696 pod_ready.go:86] duration metric: took 4.738073ms for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.034406  288696 pod_ready.go:83] waiting for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.038508  288696 pod_ready.go:94] pod "etcd-no-preload-220714" is "Ready"
	I1108 09:16:29.038530  288696 pod_ready.go:86] duration metric: took 4.10382ms for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.040573  288696 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.044618  288696 pod_ready.go:94] pod "kube-apiserver-no-preload-220714" is "Ready"
	I1108 09:16:29.044639  288696 pod_ready.go:86] duration metric: took 4.044363ms for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.046698  288696 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.428886  288696 pod_ready.go:94] pod "kube-controller-manager-no-preload-220714" is "Ready"
	I1108 09:16:29.428927  288696 pod_ready.go:86] duration metric: took 382.210796ms for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.628632  288696 pod_ready.go:83] waiting for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.028531  288696 pod_ready.go:94] pod "kube-proxy-66cm9" is "Ready"
	I1108 09:16:30.028564  288696 pod_ready.go:86] duration metric: took 399.908302ms for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.227891  288696 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.628163  288696 pod_ready.go:94] pod "kube-scheduler-no-preload-220714" is "Ready"
	I1108 09:16:30.628191  288696 pod_ready.go:86] duration metric: took 400.274382ms for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.628205  288696 pod_ready.go:40] duration metric: took 1.604903677s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:30.675012  288696 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:16:30.677007  288696 out.go:179] * Done! kubectl is now configured to use "no-preload-220714" cluster and "default" namespace by default
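With the profile finished, the readiness that the pod_ready loop above verified can be re-checked by hand (the kubectl context name is taken from the log; the timeout value is arbitrary):

  kubectl --context no-preload-220714 -n kube-system get pods
  kubectl --context no-preload-220714 -n kube-system wait pod \
    -l k8s-app=kube-dns --for=condition=Ready --timeout=4m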
	I1108 09:16:35.120895  302884 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:16:35.121004  302884 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:16:35.121175  302884 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:16:35.121292  302884 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:16:35.121353  302884 kubeadm.go:319] OS: Linux
	I1108 09:16:35.121435  302884 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:16:35.121506  302884 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:16:35.121565  302884 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:16:35.121638  302884 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:16:35.121724  302884 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:16:35.121806  302884 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:16:35.121887  302884 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:16:35.121964  302884 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:16:35.122058  302884 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:16:35.122184  302884 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:16:35.122330  302884 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:16:35.122408  302884 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:16:35.124893  302884 out.go:252]   - Generating certificates and keys ...
	I1108 09:16:35.124995  302884 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:16:35.125121  302884 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:16:35.125214  302884 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:16:35.125342  302884 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:16:35.125426  302884 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:16:35.125502  302884 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:16:35.125608  302884 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:16:35.125772  302884 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-677902 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:16:35.125840  302884 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:16:35.125968  302884 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-677902 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:16:35.126073  302884 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:16:35.126170  302884 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:16:35.126238  302884 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:16:35.126344  302884 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:16:35.126420  302884 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:16:35.126498  302884 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:16:35.126572  302884 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:16:35.126677  302884 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:16:35.126758  302884 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:16:35.126870  302884 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:16:35.126956  302884 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:16:35.128406  302884 out.go:252]   - Booting up control plane ...
	I1108 09:16:35.128525  302884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:16:35.128638  302884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:16:35.128733  302884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:16:35.128898  302884 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:16:35.128981  302884 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:16:35.129074  302884 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:16:35.129147  302884 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:16:35.129182  302884 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:16:35.129349  302884 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:16:35.129440  302884 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:16:35.129495  302884 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001760288s
	I1108 09:16:35.129587  302884 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:16:35.129669  302884 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1108 09:16:35.129744  302884 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:16:35.129825  302884 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:16:35.129905  302884 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504019821s
	I1108 09:16:35.129979  302884 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.960672888s
	I1108 09:16:35.130064  302884 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501227376s
	I1108 09:16:35.130235  302884 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:16:35.130443  302884 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:16:35.130523  302884 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:16:35.130788  302884 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-677902 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:16:35.130879  302884 kubeadm.go:319] [bootstrap-token] Using token: o1hqaz.w0k7ft9j12ywfau7
	I1108 09:16:35.132386  302884 out.go:252]   - Configuring RBAC rules ...
	I1108 09:16:35.132551  302884 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:16:35.132650  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:16:35.132870  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:16:35.133032  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:16:35.133201  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:16:35.133336  302884 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:16:35.133438  302884 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:16:35.133475  302884 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:16:35.133518  302884 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:16:35.133524  302884 kubeadm.go:319] 
	I1108 09:16:35.133583  302884 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:16:35.133600  302884 kubeadm.go:319] 
	I1108 09:16:35.133719  302884 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:16:35.133728  302884 kubeadm.go:319] 
	I1108 09:16:35.133770  302884 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:16:35.133844  302884 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:16:35.133922  302884 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:16:35.133932  302884 kubeadm.go:319] 
	I1108 09:16:35.134014  302884 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:16:35.134024  302884 kubeadm.go:319] 
	I1108 09:16:35.134080  302884 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:16:35.134089  302884 kubeadm.go:319] 
	I1108 09:16:35.134129  302884 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:16:35.134191  302884 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:16:35.134247  302884 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:16:35.134252  302884 kubeadm.go:319] 
	I1108 09:16:35.134366  302884 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:16:35.134429  302884 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:16:35.134435  302884 kubeadm.go:319] 
	I1108 09:16:35.134518  302884 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token o1hqaz.w0k7ft9j12ywfau7 \
	I1108 09:16:35.134671  302884 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 09:16:35.134700  302884 kubeadm.go:319] 	--control-plane 
	I1108 09:16:35.134706  302884 kubeadm.go:319] 
	I1108 09:16:35.134797  302884 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:16:35.134806  302884 kubeadm.go:319] 
	I1108 09:16:35.134911  302884 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token o1hqaz.w0k7ft9j12ywfau7 \
	I1108 09:16:35.135094  302884 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
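Bootstrap tokens such as the one above expire (24h by default), so the printed join command goes stale. Standard kubeadm commands to list tokens or mint a fresh join command on the control plane (not part of this run):

  kubeadm token list
  kubeadm token create --print-join-command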
	I1108 09:16:35.135113  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:35.135121  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:35.136736  302884 out.go:179] * Configuring CNI (Container Networking Interface) ...
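The kindnet recommendation above lands as a DaemonSet in kube-system (its kindnet-* pods appear later in this log). One way to confirm the rollout, assuming the DaemonSet is named kindnet as the pod names suggest:

  kubectl -n kube-system rollout status daemonset/kindnet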
	
	
	==> CRI-O <==
	Nov 08 09:16:26 embed-certs-271910 crio[779]: time="2025-11-08T09:16:26.336652198Z" level=info msg="Started container" PID=1803 containerID=4e9b5816a2417e759fdbb105c13106eef2c4153674c12aef5f741e0beaa62093 description=kube-system/coredns-66bc5c9577-cbw4j/coredns id=bddca02b-e9df-4726-a6fa-187ea8c1f90e name=/runtime.v1.RuntimeService/StartContainer sandboxID=3b25a8b7a63bff71449542f23d6b5a805b8cc40fff5d2e704548e8c25a9b7fca
	Nov 08 09:16:26 embed-certs-271910 crio[779]: time="2025-11-08T09:16:26.337200411Z" level=info msg="Started container" PID=1802 containerID=a91252af9f98db2e585e60eda3d9df0f8fd9a2ff9872051fa184d78f22ef63fe description=kube-system/storage-provisioner/storage-provisioner id=0e9136ea-c516-4fbd-899a-639eb3f4a2fb name=/runtime.v1.RuntimeService/StartContainer sandboxID=95c5c0feb383149046aff93cd1b979978a42deaee8e7c88f76107463c25c21bc
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.463409762Z" level=info msg="Running pod sandbox: default/busybox/POD" id=c23be063-224a-41ca-a406-e37f70a924be name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.463507125Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.4685542Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3b39292475edc219bc93cd7f948aac7d34cdc97909d55cf1aa4486ebad70577 UID:be77aed3-863e-433b-85af-7850d4a6cecd NetNS:/var/run/netns/6c9cb632-c4a0-458f-acc6-5ba79036635f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007125c8}] Aliases:map[]}"
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.4685882Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.478971214Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:f3b39292475edc219bc93cd7f948aac7d34cdc97909d55cf1aa4486ebad70577 UID:be77aed3-863e-433b-85af-7850d4a6cecd NetNS:/var/run/netns/6c9cb632-c4a0-458f-acc6-5ba79036635f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0007125c8}] Aliases:map[]}"
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.479085236Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.479862272Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.480985553Z" level=info msg="Ran pod sandbox f3b39292475edc219bc93cd7f948aac7d34cdc97909d55cf1aa4486ebad70577 with infra container: default/busybox/POD" id=c23be063-224a-41ca-a406-e37f70a924be name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.482358308Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=6a604457-0304-4336-abbf-94bde096b8ad name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.482492885Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=6a604457-0304-4336-abbf-94bde096b8ad name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.482552845Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=6a604457-0304-4336-abbf-94bde096b8ad name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.483348906Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=29024e49-d9c5-4be2-8041-ca270b8efa19 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:29 embed-certs-271910 crio[779]: time="2025-11-08T09:16:29.485499725Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.908634955Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=29024e49-d9c5-4be2-8041-ca270b8efa19 name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.909511397Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=173d32ea-0b92-4bfe-bafc-87fea9891450 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.910893573Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=957470d2-e8e4-44a8-9293-f2293662d9df name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.914590685Z" level=info msg="Creating container: default/busybox/busybox" id=84c3cf6a-e2f8-4328-b8f2-23c3bbb4b5d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.914721266Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.918498249Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.918884282Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.948880977Z" level=info msg="Created container 6b89387dea1b030c79c3f7f3068ae0b0e93bf7b070b39a9bcc958bcb030a78b1: default/busybox/busybox" id=84c3cf6a-e2f8-4328-b8f2-23c3bbb4b5d5 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.949775589Z" level=info msg="Starting container: 6b89387dea1b030c79c3f7f3068ae0b0e93bf7b070b39a9bcc958bcb030a78b1" id=838a967b-9956-450a-a12d-ad30e27b6464 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:30 embed-certs-271910 crio[779]: time="2025-11-08T09:16:30.951740361Z" level=info msg="Started container" PID=1880 containerID=6b89387dea1b030c79c3f7f3068ae0b0e93bf7b070b39a9bcc958bcb030a78b1 description=default/busybox/busybox id=838a967b-9956-450a-a12d-ad30e27b6464 name=/runtime.v1.RuntimeService/StartContainer sandboxID=f3b39292475edc219bc93cd7f948aac7d34cdc97909d55cf1aa4486ebad70577
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                          NAMESPACE
	6b89387dea1b0       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   f3b39292475ed       busybox                                      default
	4e9b5816a2417       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   3b25a8b7a63bf       coredns-66bc5c9577-cbw4j                     kube-system
	a91252af9f98d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   95c5c0feb3831       storage-provisioner                          kube-system
	3f99a71dc47e9       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   d95382ba16e1f       kindnet-49l78                                kube-system
	0e108256a9974       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   66f7bf4683ce2       kube-proxy-lwbl6                             kube-system
	ac1abe71c317b       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      34 seconds ago      Running             kube-apiserver            0                   76c2d2db9a423       kube-apiserver-embed-certs-271910            kube-system
	77f184bc8fd93       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      34 seconds ago      Running             etcd                      0                   def57461b7fbe       etcd-embed-certs-271910                      kube-system
	563ac7ba32e79       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      34 seconds ago      Running             kube-controller-manager   0                   8ddbff2556ea3       kube-controller-manager-embed-certs-271910   kube-system
	d1f7cef42ad29       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      34 seconds ago      Running             kube-scheduler            0                   a3e8ac4f8024b       kube-scheduler-embed-certs-271910            kube-system
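The table above is crictl output from the node. A filtered equivalent, reusing the label selector seen earlier in this log (run inside `minikube ssh` or directly on the node):

  sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system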
	
	
	==> coredns [4e9b5816a2417e759fdbb105c13106eef2c4153674c12aef5f741e0beaa62093] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:54842 - 33045 "HINFO IN 8079443803948424131.9067971357121238285. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.020272916s
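The HINFO query answered NXDOMAIN above is CoreDNS's routine upstream probe, not an error. In-cluster resolution can be spot-checked from the busybox pod created earlier (nslookup works in this 1.28.4 busybox image):

  kubectl exec busybox -- nslookup kubernetes.default.svc.cluster.local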
	
	
	==> describe nodes <==
	Name:               embed-certs-271910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-271910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=embed-certs-271910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-271910
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:16:29 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:16:25 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:16:25 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:16:25 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:16:25 +0000   Sat, 08 Nov 2025 09:16:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-271910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                5a4dbec0-6466-4d25-92b6-8bbd4bdc538c
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-cbw4j                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-embed-certs-271910                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         29s
	  kube-system                 kindnet-49l78                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-embed-certs-271910             250m (3%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-controller-manager-embed-certs-271910    200m (2%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 kube-proxy-lwbl6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-embed-certs-271910             100m (1%)     0 (0%)      0 (0%)           0 (0%)         29s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  35s (x8 over 36s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    35s (x8 over 36s)  kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     35s (x8 over 36s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  Starting                 29s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  29s                kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    29s                kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29s                kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node embed-certs-271910 event: Registered Node embed-certs-271910 in Controller
	  Normal  NodeReady                13s                kubelet          Node embed-certs-271910 status is now: NodeReady
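The node dump above is standard describe output; to regenerate it against this cluster (node name from the log):

  kubectl describe node embed-certs-271910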
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [77f184bc8fd93631998de515653d136639ca62af57c1c0080908b8e7aaa06878] <==
	{"level":"warn","ts":"2025-11-08T09:16:05.649826Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.659491Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.668090Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50614","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.676562Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.685438Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50652","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.696118Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50678","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.704047Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50702","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.711428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50716","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.720563Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50764","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.738116Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50780","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.746889Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50798","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.756042Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50808","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.767221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.776102Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50830","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.787141Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50848","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.800169Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50872","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.810634Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50896","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.818018Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50910","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.826418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50928","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.834632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.842500Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.854484Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50980","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.869632Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.875493Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.951503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:51038","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:16:38 up 59 min,  0 user,  load average: 5.16, 3.89, 2.45
	Linux embed-certs-271910 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [3f99a71dc47e93b4869bc3175455eff1b7038b3b590c5b76c409326930a00212] <==
	I1108 09:16:15.298962       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:15.299503       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 09:16:15.299734       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:15.299779       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:15.299813       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:15Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:15.596932       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:15.596991       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:15.597005       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:15.695186       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:15.997395       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:15.997667       1 metrics.go:72] Registering metrics
	I1108 09:16:15.997780       1 controller.go:711] "Syncing nftables rules"
	I1108 09:16:25.596627       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:16:25.596683       1 main.go:301] handling current node
	I1108 09:16:35.597779       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:16:35.597835       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ac1abe71c317b8d4386832f17f68a4ad0a7f0ada59cb1163c79b7938701fbe90] <==
	I1108 09:16:06.569579       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:16:06.573532       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:16:06.573643       1 default_servicecidr_controller.go:166] Creating default ServiceCIDR with CIDRs: [10.96.0.0/12]
	I1108 09:16:06.577839       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:16:06.585018       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:16:06.586439       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:06.595932       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:16:07.473175       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:16:07.477444       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:16:07.477463       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:16:08.077114       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:16:08.117738       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:16:08.178995       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:16:08.185416       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.85.2]
	I1108 09:16:08.186575       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:16:08.192633       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:16:08.522113       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:16:09.445237       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:16:09.457001       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:16:09.469389       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:16:14.220711       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:16:14.530597       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 09:16:14.612225       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:14.643430       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	E1108 09:16:37.246753       1 conn.go:339] Error on socket receive: read tcp 192.168.85.2:8443->192.168.85.1:53070: use of closed network connection
	
	
	==> kube-controller-manager [563ac7ba32e791defc98f8a9f90d157c57dd0edd05cb568bf0dfe31083c94450] <==
	I1108 09:16:13.503470       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:16:13.505599       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:16:13.505708       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:16:13.505771       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="embed-certs-271910"
	I1108 09:16:13.505894       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:16:13.514207       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:16:13.515492       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:16:13.516301       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="embed-certs-271910" podCIDRs=["10.244.0.0/24"]
	I1108 09:16:13.516355       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:16:13.516599       1 shared_informer.go:356] "Caches are synced" controller="cronjob"
	I1108 09:16:13.516624       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:16:13.516652       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:16:13.516665       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:16:13.516734       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:16:13.517173       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:16:13.517205       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:16:13.517454       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:16:13.517506       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:16:13.520781       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:16:13.522016       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:16:13.523377       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:16:13.529518       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:16:13.529543       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:16:13.550224       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:16:28.508060       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [0e108256a99742fdd8919e388367c9f5bf2badddf76c50063fd4bff5261ab1be] <==
	I1108 09:16:15.055956       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:16:15.140328       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:16:15.240471       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:16:15.240512       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 09:16:15.240858       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:16:15.288965       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:15.289027       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:16:15.296848       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:16:15.297903       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:16:15.298173       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:15.301537       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:16:15.301583       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:16:15.301648       1 config.go:200] "Starting service config controller"
	I1108 09:16:15.301662       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:16:15.301816       1 config.go:309] "Starting node config controller"
	I1108 09:16:15.301897       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:16:15.302057       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:16:15.302098       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:16:15.401751       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:16:15.401750       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:16:15.402606       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:16:15.402673       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [d1f7cef42ad29a585860fa0622789d20b38ad9e648dc7ba1a1d302371fdd5e6d] <==
	E1108 09:16:06.526728       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:16:06.526740       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:16:06.526836       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:16:06.526883       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:16:06.526887       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:06.526943       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:16:07.340058       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:16:07.340055       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:16:07.348744       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:16:07.384331       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:16:07.436079       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:16:07.494410       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:16:07.518714       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:16:07.525976       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:16:07.534248       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:16:07.546742       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:16:07.651593       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:16:07.677961       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:16:07.719251       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:07.736951       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:16:07.770462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:16:07.803768       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:16:07.821547       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:07.832052       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	I1108 09:16:10.722814       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:16:10 embed-certs-271910 kubelet[1304]: I1108 09:16:10.374391    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-embed-certs-271910" podStartSLOduration=1.374368696 podStartE2EDuration="1.374368696s" podCreationTimestamp="2025-11-08 09:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:10.359408574 +0000 UTC m=+1.160134987" watchObservedRunningTime="2025-11-08 09:16:10.374368696 +0000 UTC m=+1.175095114"
	Nov 08 09:16:10 embed-certs-271910 kubelet[1304]: I1108 09:16:10.384831    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-embed-certs-271910" podStartSLOduration=1.3848001970000001 podStartE2EDuration="1.384800197s" podCreationTimestamp="2025-11-08 09:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:10.374901122 +0000 UTC m=+1.175627515" watchObservedRunningTime="2025-11-08 09:16:10.384800197 +0000 UTC m=+1.185526610"
	Nov 08 09:16:10 embed-certs-271910 kubelet[1304]: I1108 09:16:10.398978    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-embed-certs-271910" podStartSLOduration=1.3989570900000001 podStartE2EDuration="1.39895709s" podCreationTimestamp="2025-11-08 09:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:10.398440433 +0000 UTC m=+1.199166916" watchObservedRunningTime="2025-11-08 09:16:10.39895709 +0000 UTC m=+1.199683510"
	Nov 08 09:16:10 embed-certs-271910 kubelet[1304]: I1108 09:16:10.399121    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-embed-certs-271910" podStartSLOduration=1.39911459 podStartE2EDuration="1.39911459s" podCreationTimestamp="2025-11-08 09:16:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:10.385052585 +0000 UTC m=+1.185778990" watchObservedRunningTime="2025-11-08 09:16:10.39911459 +0000 UTC m=+1.199841002"
	Nov 08 09:16:13 embed-certs-271910 kubelet[1304]: I1108 09:16:13.545393    1304 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:16:13 embed-certs-271910 kubelet[1304]: I1108 09:16:13.546129    1304 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.626908    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpx4z\" (UniqueName: \"kubernetes.io/projected/8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c-kube-api-access-mpx4z\") pod \"kube-proxy-lwbl6\" (UID: \"8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c\") " pod="kube-system/kube-proxy-lwbl6"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.626988    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c-kube-proxy\") pod \"kube-proxy-lwbl6\" (UID: \"8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c\") " pod="kube-system/kube-proxy-lwbl6"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.627018    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c-xtables-lock\") pod \"kube-proxy-lwbl6\" (UID: \"8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c\") " pod="kube-system/kube-proxy-lwbl6"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.627168    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c-lib-modules\") pod \"kube-proxy-lwbl6\" (UID: \"8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c\") " pod="kube-system/kube-proxy-lwbl6"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.729504    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb346bcf-44a7-4255-a33c-fdb05b6193f2-xtables-lock\") pod \"kindnet-49l78\" (UID: \"bb346bcf-44a7-4255-a33c-fdb05b6193f2\") " pod="kube-system/kindnet-49l78"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.729566    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb346bcf-44a7-4255-a33c-fdb05b6193f2-lib-modules\") pod \"kindnet-49l78\" (UID: \"bb346bcf-44a7-4255-a33c-fdb05b6193f2\") " pod="kube-system/kindnet-49l78"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.729611    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4wtsh\" (UniqueName: \"kubernetes.io/projected/bb346bcf-44a7-4255-a33c-fdb05b6193f2-kube-api-access-4wtsh\") pod \"kindnet-49l78\" (UID: \"bb346bcf-44a7-4255-a33c-fdb05b6193f2\") " pod="kube-system/kindnet-49l78"
	Nov 08 09:16:14 embed-certs-271910 kubelet[1304]: I1108 09:16:14.729654    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/bb346bcf-44a7-4255-a33c-fdb05b6193f2-cni-cfg\") pod \"kindnet-49l78\" (UID: \"bb346bcf-44a7-4255-a33c-fdb05b6193f2\") " pod="kube-system/kindnet-49l78"
	Nov 08 09:16:15 embed-certs-271910 kubelet[1304]: I1108 09:16:15.362664    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-49l78" podStartSLOduration=1.362642638 podStartE2EDuration="1.362642638s" podCreationTimestamp="2025-11-08 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:15.362473544 +0000 UTC m=+6.163199956" watchObservedRunningTime="2025-11-08 09:16:15.362642638 +0000 UTC m=+6.163369051"
	Nov 08 09:16:15 embed-certs-271910 kubelet[1304]: I1108 09:16:15.377355    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-lwbl6" podStartSLOduration=1.377276099 podStartE2EDuration="1.377276099s" podCreationTimestamp="2025-11-08 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:15.376936583 +0000 UTC m=+6.177662995" watchObservedRunningTime="2025-11-08 09:16:15.377276099 +0000 UTC m=+6.178002512"
	Nov 08 09:16:25 embed-certs-271910 kubelet[1304]: I1108 09:16:25.954412    1304 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:16:26 embed-certs-271910 kubelet[1304]: I1108 09:16:26.015178    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzjwc\" (UniqueName: \"kubernetes.io/projected/b1a3271b-2b58-460a-98e7-29636a0c2860-kube-api-access-nzjwc\") pod \"coredns-66bc5c9577-cbw4j\" (UID: \"b1a3271b-2b58-460a-98e7-29636a0c2860\") " pod="kube-system/coredns-66bc5c9577-cbw4j"
	Nov 08 09:16:26 embed-certs-271910 kubelet[1304]: I1108 09:16:26.015230    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/69b5b176-edf7-4eda-82be-7e9980c13459-tmp\") pod \"storage-provisioner\" (UID: \"69b5b176-edf7-4eda-82be-7e9980c13459\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:26 embed-certs-271910 kubelet[1304]: I1108 09:16:26.015253    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbh24\" (UniqueName: \"kubernetes.io/projected/69b5b176-edf7-4eda-82be-7e9980c13459-kube-api-access-kbh24\") pod \"storage-provisioner\" (UID: \"69b5b176-edf7-4eda-82be-7e9980c13459\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:26 embed-certs-271910 kubelet[1304]: I1108 09:16:26.015302    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b1a3271b-2b58-460a-98e7-29636a0c2860-config-volume\") pod \"coredns-66bc5c9577-cbw4j\" (UID: \"b1a3271b-2b58-460a-98e7-29636a0c2860\") " pod="kube-system/coredns-66bc5c9577-cbw4j"
	Nov 08 09:16:26 embed-certs-271910 kubelet[1304]: I1108 09:16:26.400581    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-cbw4j" podStartSLOduration=12.40055723 podStartE2EDuration="12.40055723s" podCreationTimestamp="2025-11-08 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:26.3913788 +0000 UTC m=+17.192105211" watchObservedRunningTime="2025-11-08 09:16:26.40055723 +0000 UTC m=+17.201283634"
	Nov 08 09:16:26 embed-certs-271910 kubelet[1304]: I1108 09:16:26.400876    1304 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.400867885 podStartE2EDuration="12.400867885s" podCreationTimestamp="2025-11-08 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:26.400692103 +0000 UTC m=+17.201418517" watchObservedRunningTime="2025-11-08 09:16:26.400867885 +0000 UTC m=+17.201594296"
	Nov 08 09:16:29 embed-certs-271910 kubelet[1304]: I1108 09:16:29.235563    1304 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-245jq\" (UniqueName: \"kubernetes.io/projected/be77aed3-863e-433b-85af-7850d4a6cecd-kube-api-access-245jq\") pod \"busybox\" (UID: \"be77aed3-863e-433b-85af-7850d4a6cecd\") " pod="default/busybox"
	Nov 08 09:16:37 embed-certs-271910 kubelet[1304]: E1108 09:16:37.246695    1304 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:51836->127.0.0.1:42751: write tcp 127.0.0.1:51836->127.0.0.1:42751: write: broken pipe
	
	
	==> storage-provisioner [a91252af9f98db2e585e60eda3d9df0f8fd9a2ff9872051fa184d78f22ef63fe] <==
	I1108 09:16:26.351869       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:16:26.359782       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:16:26.359832       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:16:26.362014       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:26.367862       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:16:26.368050       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:16:26.368116       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e00d8485-fd2f-4aef-b7f8-239d96fe73e5", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-271910_24301787-14dd-4630-b438-c3a1f74d63fc became leader
	I1108 09:16:26.368184       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-271910_24301787-14dd-4630-b438-c3a1f74d63fc!
	W1108 09:16:26.370413       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:26.377076       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:16:26.468428       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-271910_24301787-14dd-4630-b438-c3a1f74d63fc!
	W1108 09:16:28.381247       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:28.385554       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:30.389169       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:30.394277       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:32.397445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:32.401445       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:34.405113       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:34.409935       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:36.413118       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:36.417034       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:38.420635       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:38.427690       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
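One recurring item in the storage-provisioner log above: every list/watch of its lock object triggers a "v1 Endpoints is deprecated in v1.33+" warning, because leader election still uses the kube-system/k8s.io-minikube-hostpath Endpoints object. A sketch of the non-deprecated Lease-based lock from client-go (illustrative only, not the provisioner's actual code; the identity and timings below are placeholders):

// Sketch, assuming a reachable kubeconfig at the default path: take the same
// kube-system/k8s.io-minikube-hostpath lock via a coordination.k8s.io Lease
// instead of a v1 Endpoints object, which avoids the deprecation warnings.
package main

import (
	"context"
	"log"
	"os"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	id, _ := os.Hostname() // placeholder identity
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: id})
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("acquired lease; start provisioning")
			},
			OnStoppedLeading: func() {
				log.Println("lost lease; stop")
			},
		},
	})
}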
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271910 -n embed-certs-271910
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-271910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
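The field selector on the line above is the standard way to surface pods stuck outside Running. For anyone scripting the same check in Go, a client-go equivalent (kubeconfig handling simplified; assumes the current context is embed-certs-271910):

// Sketch: list all pods whose phase is not Running, mirroring the
// --field-selector=status.phase!=Running on the kubectl line above.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pods, err := client.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(),
		metav1.ListOptions{FieldSelector: "status.phase!=Running"})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range pods.Items {
		fmt.Println(p.Namespace + "/" + p.Name)
	}
}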
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (2.23s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.35s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (262.793867ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:16:38Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
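The exit status 11 above is minikube's paused-state guard tripping before the addon manifests are ever applied: per the stderr, the check runs `sudo runc list -f json` inside the node, and on this crio node `/run/runc` does not exist, so the probe itself fails. A minimal sketch of reproducing that probe from the test workspace (the binary path and profile name come from the log; reading the missing state dir as a runtime mismatch is our interpretation, not minikube's code):

// Reproduction sketch of the "check paused" probe shown failing in stderr.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command minikube's paused check runs inside the node:
	//   sudo runc list -f json
	cmd := exec.Command("out/minikube-linux-amd64", "-p", "no-preload-220714",
		"ssh", "--", "sudo", "runc", "list", "-f", "json")
	out, err := cmd.CombinedOutput()
	if err != nil {
		// On this crio node /run/runc is absent, so the probe exits non-zero
		// ("open /run/runc: no such file or directory") and minikube aborts
		// the addon enable with MK_ADDON_ENABLE_PAUSED (exit status 11).
		fmt.Printf("paused probe failed: %v\n%s", err, out)
		if strings.Contains(string(out), "no such file or directory") {
			fmt.Println("runc state dir missing: runtime mismatch, not a paused node")
		}
		return
	}
	fmt.Printf("runc containers: %s\n", out)
}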
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-220714 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context no-preload-220714 describe deploy/metrics-server -n kube-system: exit status 1 (65.375929ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context no-preload-220714 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
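The assertion string also documents how the two override flags compose: the custom registry is simply prefixed onto the custom image as <registry>/<image>. A tiny illustration (overrideImage is a hypothetical helper, not minikube's; the values come from the failing command line):

// Illustrative only: mirrors the "<registry>/<image>" join visible in the
// expected string "fake.domain/registry.k8s.io/echoserver:1.4".
package main

import "fmt"

func overrideImage(registry, image string) string {
	if registry == "" {
		return image
	}
	return registry + "/" + image
}

func main() {
	// From the test invocation:
	//   --images=MetricsServer=registry.k8s.io/echoserver:1.4
	//   --registries=MetricsServer=fake.domain
	fmt.Println(overrideImage("fake.domain", "registry.k8s.io/echoserver:1.4"))
	// Output: fake.domain/registry.k8s.io/echoserver:1.4
}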
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-220714
helpers_test.go:243: (dbg) docker inspect no-preload-220714:

-- stdout --
	[
	    {
	        "Id": "446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d",
	        "Created": "2025-11-08T09:15:34.135970344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 289326,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:15:34.175899771Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/hostname",
	        "HostsPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/hosts",
	        "LogPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d-json.log",
	        "Name": "/no-preload-220714",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-220714:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-220714",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d",
	                "LowerDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-220714",
	                "Source": "/var/lib/docker/volumes/no-preload-220714/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-220714",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-220714",
	                "name.minikube.sigs.k8s.io": "no-preload-220714",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "2962b583b3eb832fcd2b891086c678dfea218efc0bd3aa8e411b8666ad5c1503",
	            "SandboxKey": "/var/run/docker/netns/2962b583b3eb",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33094"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33095"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33098"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33096"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33097"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-220714": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "ae:38:24:0e:3b:9d",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2c6206fd83352e5892c70867654eb8c3127b66df1d3abb8d7e06c7e601cea52",
	                    "EndpointID": "cf3eb4ff4f53baf9672eaf18ee5fcb2f3d86c0a433e75c3db935aea526a27cba",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-220714",
	                        "446e9eda1361"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
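The inspect dump shows minikube asked Docker for ephemeral host ports (empty HostPort in PortBindings) and Docker resolved them under NetworkSettings.Ports. A sketch of pulling the resolved SSH port out of that JSON with the standard library, the same value the cli_runner template in the Last Start log below extracts (struct fields mirror the JSON above):

// Sketch, assuming the container inspected above is still running: extract
// the host address mapped to 22/tcp from `docker inspect` output.
package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

type inspect struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "no-preload-220714").Output()
	if err != nil {
		log.Fatal(err)
	}
	var containers []inspect
	if err := json.Unmarshal(out, &containers); err != nil {
		log.Fatal(err)
	}
	// Would panic if 22/tcp had no binding; in the dump above this
	// prints 127.0.0.1:33094.
	p := containers[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("%s:%s\n", p.HostIp, p.HostPort)
}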
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-220714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-220714 logs -n 25: (1.127478107s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                   │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-732849 sudo systemctl status docker --all --full --no-pager                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat docker --no-pager                                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/docker/daemon.json                                                                                                                        │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo docker system info                                                                                                                                 │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl status cri-docker --all --full --no-pager                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat cri-docker --no-pager                                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/systemd/system/cri-docker.service.d/10-cni.conf                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo cat /usr/lib/systemd/system/cri-docker.service                                                                                                     │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cri-dockerd --version                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status containerd --all --full --no-pager                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat containerd --no-pager                                                                                                                │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /lib/systemd/system/containerd.service                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/containerd/config.toml                                                                                                                    │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo containerd config dump                                                                                                                             │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status crio --all --full --no-pager                                                                                                      │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl cat crio --no-pager                                                                                                                      │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                            │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo crio config                                                                                                                                        │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p bridge-732849                                                                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                          │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                         │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	└─────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:16:14
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:16:14.619702  302884 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:14.620015  302884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:14.620022  302884 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:14.620029  302884 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:14.620497  302884 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:16:14.621237  302884 out.go:368] Setting JSON to false
	I1108 09:16:14.623593  302884 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3526,"bootTime":1762589849,"procs":331,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:16:14.624451  302884 start.go:143] virtualization: kvm guest
	I1108 09:16:14.626457  302884 out.go:179] * [default-k8s-diff-port-677902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:16:14.629520  302884 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:16:14.629524  302884 notify.go:221] Checking for updates...
	I1108 09:16:14.631258  302884 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:16:14.632595  302884 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:16:14.634002  302884 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:16:14.635485  302884 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:16:14.636691  302884 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:16:14.638679  302884 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:14.638844  302884 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:14.638954  302884 config.go:182] Loaded profile config "old-k8s-version-339286": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:16:14.639063  302884 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:16:14.691152  302884 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:16:14.691332  302884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:14.813570  302884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:16:14.796199727 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:14.813892  302884 docker.go:319] overlay module found
	I1108 09:16:14.816970  302884 out.go:179] * Using the docker driver based on user configuration
	I1108 09:16:14.818292  302884 start.go:309] selected driver: docker
	I1108 09:16:14.818348  302884 start.go:930] validating driver "docker" against <nil>
	I1108 09:16:14.818374  302884 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:16:14.819199  302884 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:14.933520  302884 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:16:14.916255188 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:14.933793  302884 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 09:16:14.934044  302884 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:14.938665  302884 out.go:179] * Using Docker driver with root privileges
	I1108 09:16:14.940021  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:14.940170  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:14.940249  302884 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:16:14.940569  302884 start.go:353] cluster config:
	{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:16:14.943927  302884 out.go:179] * Starting "default-k8s-diff-port-677902" primary control-plane node in "default-k8s-diff-port-677902" cluster
	I1108 09:16:14.945738  302884 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:16:14.946990  302884 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:16:14.067805  294020 cli_runner.go:164] Run: docker container inspect embed-certs-271910 --format={{.State.Status}}
	I1108 09:16:14.074060  294020 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.074085  294020 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:16:14.074146  294020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:14.103402  294020 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.103432  294020 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:16:14.103506  294020 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:14.108099  294020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:16:14.132496  294020 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33099 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:16:14.147070  294020 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.85.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:16:14.201882  294020 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:14.237829  294020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.253009  294020 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.432416  294020 start.go:977] {"host.minikube.internal": 192.168.85.1} host record injected into CoreDNS's ConfigMap
	I1108 09:16:14.437859  294020 node_ready.go:35] waiting up to 6m0s for node "embed-certs-271910" to be "Ready" ...
	I1108 09:16:14.957896  294020 kapi.go:214] "coredns" deployment in "kube-system" namespace and "embed-certs-271910" context rescaled to 1 replicas
	I1108 09:16:14.969443  294020 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
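
The sed pipeline run at 09:16:14.147 above splices a hosts block into the CoreDNS Corefile so that host.minikube.internal resolves to the host gateway (192.168.85.1 for this profile); the "host record injected" line confirms the replace took. A hypothetical way to check the result afterwards, not part of the test itself:

	kubectl --context embed-certs-271910 -n kube-system \
	  get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'
	# expected fragment (reconstructed from the sed expression above):
	#   hosts {
	#      192.168.85.1 host.minikube.internal
	#      fallthrough
	#   }
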
	I1108 09:16:14.948456  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:14.948520  302884 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:16:14.948532  302884 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:16:14.948688  302884 cache.go:59] Caching tarball of preloaded images
	I1108 09:16:14.949020  302884 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:16:14.949079  302884 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:16:14.949215  302884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:16:14.949344  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json: {Name:mk5bfc4db394c708a6042a234b18539bd8dad38a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
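
The cluster config dumped at 09:16:14.940 above is what this step persists to the profile's config.json. Assuming the JSON field names mirror that Go struct dump (they normally do for minikube profiles), individual settings can be pulled back out with jq, e.g.:

	jq '.Driver, .KubernetesConfig.KubernetesVersion, .Nodes[0].Port' \
	  /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json
	# "docker"
	# "v1.34.1"
	# 8444
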
	I1108 09:16:14.984638  302884 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:16:14.984672  302884 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:16:14.984705  302884 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:16:14.984748  302884 start.go:360] acquireMachinesLock for default-k8s-diff-port-677902: {Name:mk526669374d724485de61415f0aa79950bc7fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:14.984878  302884 start.go:364] duration metric: took 108.394µs to acquireMachinesLock for "default-k8s-diff-port-677902"
	I1108 09:16:14.984915  302884 start.go:93] Provisioning new machine with config: &{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:16:14.985006  302884 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:16:10.370669  285556 node_ready.go:57] node "old-k8s-version-339286" has "Ready":"False" status (will retry)
	W1108 09:16:12.868173  285556 node_ready.go:57] node "old-k8s-version-339286" has "Ready":"False" status (will retry)
	I1108 09:16:14.398457  285556 node_ready.go:49] node "old-k8s-version-339286" is "Ready"
	I1108 09:16:14.398745  285556 node_ready.go:38] duration metric: took 13.534293684s for node "old-k8s-version-339286" to be "Ready" ...
	I1108 09:16:14.398779  285556 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:14.398863  285556 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:14.426992  285556 api_server.go:72] duration metric: took 14.046193072s to wait for apiserver process to appear ...
	I1108 09:16:14.427020  285556 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:14.427040  285556 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:16:14.457535  285556 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:16:14.460756  285556 api_server.go:141] control plane version: v1.28.0
	I1108 09:16:14.460783  285556 api_server.go:131] duration metric: took 33.754556ms to wait for apiserver health ...
	I1108 09:16:14.460796  285556 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:14.468460  285556 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:14.468503  285556 system_pods.go:61] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.468511  285556 system_pods.go:61] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.468519  285556 system_pods.go:61] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.468524  285556 system_pods.go:61] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.468530  285556 system_pods.go:61] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.468534  285556 system_pods.go:61] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.468539  285556 system_pods.go:61] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.468545  285556 system_pods.go:61] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:14.468553  285556 system_pods.go:74] duration metric: took 7.750133ms to wait for pod list to return data ...
	I1108 09:16:14.468563  285556 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:14.473761  285556 default_sa.go:45] found service account: "default"
	I1108 09:16:14.473786  285556 default_sa.go:55] duration metric: took 5.215828ms for default service account to be created ...
	I1108 09:16:14.473811  285556 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:14.485871  285556 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:14.485923  285556 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.485932  285556 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.485941  285556 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.485953  285556 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.485970  285556 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.485975  285556 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.485991  285556 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.485998  285556 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:14.486054  285556 retry.go:31] will retry after 246.902773ms: missing components: kube-dns
	I1108 09:16:14.744570  285556 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:14.744609  285556 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:14.744618  285556 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running
	I1108 09:16:14.744627  285556 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:14.744637  285556 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running
	I1108 09:16:14.744643  285556 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running
	I1108 09:16:14.744648  285556 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:14.744653  285556 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running
	I1108 09:16:14.744658  285556 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Running
	I1108 09:16:14.744667  285556 system_pods.go:126] duration metric: took 270.849268ms to wait for k8s-apps to be running ...
	I1108 09:16:14.744677  285556 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:14.744731  285556 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:14.769258  285556 system_svc.go:56] duration metric: took 24.56978ms WaitForService to wait for kubelet
	I1108 09:16:14.769309  285556 kubeadm.go:587] duration metric: took 14.388514306s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:14.769556  285556 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:14.774712  285556 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:14.774739  285556 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:14.774812  285556 node_conditions.go:105] duration metric: took 5.192043ms to run NodePressure ...
	I1108 09:16:14.774830  285556 start.go:242] waiting for startup goroutines ...
	I1108 09:16:14.774881  285556 start.go:247] waiting for cluster config update ...
	I1108 09:16:14.774895  285556 start.go:256] writing updated cluster config ...
	I1108 09:16:14.775329  285556 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:14.780932  285556 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:14.790003  285556 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:14.428477  288696 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.428494  288696 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:16:14.428555  288696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:14.459240  288696 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:14.459267  288696 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:16:14.459355  288696 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:14.477655  288696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:16:14.497326  288696 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33094 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:16:14.636260  288696 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.94.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:16:14.677268  288696 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:14.695739  288696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:16:14.805038  288696 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:16:15.046647  288696 start.go:977] {"host.minikube.internal": 192.168.94.1} host record injected into CoreDNS's ConfigMap
	I1108 09:16:15.048786  288696 node_ready.go:35] waiting up to 6m0s for node "no-preload-220714" to be "Ready" ...
	I1108 09:16:15.350945  288696 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:16:15.801076  285556 pod_ready.go:94] pod "coredns-5dd5756b68-88pvx" is "Ready"
	I1108 09:16:15.801161  285556 pod_ready.go:86] duration metric: took 1.011063973s for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.805636  285556 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.811600  285556 pod_ready.go:94] pod "etcd-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.811650  285556 pod_ready.go:86] duration metric: took 5.984998ms for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.816583  285556 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.823575  285556 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.823606  285556 pod_ready.go:86] duration metric: took 6.946404ms for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.827507  285556 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:15.995157  285556 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-339286" is "Ready"
	I1108 09:16:15.995188  285556 pod_ready.go:86] duration metric: took 167.654484ms for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.194993  285556 pod_ready.go:83] waiting for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.594916  285556 pod_ready.go:94] pod "kube-proxy-v4l6x" is "Ready"
	I1108 09:16:16.594953  285556 pod_ready.go:86] duration metric: took 399.929202ms for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:16.795274  285556 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:17.194081  285556 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-339286" is "Ready"
	I1108 09:16:17.194107  285556 pod_ready.go:86] duration metric: took 398.769764ms for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:17.194123  285556 pod_ready.go:40] duration metric: took 2.41311476s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:17.240446  285556 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:16:17.242415  285556 out.go:203] 
	W1108 09:16:17.243926  285556 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:16:17.248943  285556 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:16:17.250772  285556 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-339286" cluster and "default" namespace by default
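
The warning two lines up is kubectl's version-skew policy surfacing: kubectl is only guaranteed to work within one minor version of the API server, and 1.34 against 1.28 is a skew of six minors. The suggested workaround routes the call through a profile-matched kubectl that minikube downloads on demand:

	minikube -p old-k8s-version-339286 kubectl -- get pods -A
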
	I1108 09:16:15.355429  288696 addons.go:515] duration metric: took 994.950876ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:16:15.554093  288696 kapi.go:214] "coredns" deployment in "kube-system" namespace and "no-preload-220714" context rescaled to 1 replicas
	W1108 09:16:17.051497  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:14.970722  294020 addons.go:515] duration metric: took 934.784036ms for enable addons: enabled=[storage-provisioner default-storageclass]
	W1108 09:16:16.442258  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:14.988644  302884 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:16:14.988941  302884 start.go:159] libmachine.API.Create for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:16:14.988979  302884 client.go:173] LocalClient.Create starting
	I1108 09:16:14.989121  302884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 09:16:14.989164  302884 main.go:143] libmachine: Decoding PEM data...
	I1108 09:16:14.989194  302884 main.go:143] libmachine: Parsing certificate...
	I1108 09:16:14.989303  302884 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 09:16:14.989337  302884 main.go:143] libmachine: Decoding PEM data...
	I1108 09:16:14.989349  302884 main.go:143] libmachine: Parsing certificate...
	I1108 09:16:14.989787  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:16:15.020585  302884 cli_runner.go:211] docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:16:15.020664  302884 network_create.go:284] running [docker network inspect default-k8s-diff-port-677902] to gather additional debugging logs...
	I1108 09:16:15.020681  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902
	W1108 09:16:15.047609  302884 cli_runner.go:211] docker network inspect default-k8s-diff-port-677902 returned with exit code 1
	I1108 09:16:15.047686  302884 network_create.go:287] error running [docker network inspect default-k8s-diff-port-677902]: docker network inspect default-k8s-diff-port-677902: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network default-k8s-diff-port-677902 not found
	I1108 09:16:15.047745  302884 network_create.go:289] output of [docker network inspect default-k8s-diff-port-677902]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network default-k8s-diff-port-677902 not found
	
	** /stderr **
	I1108 09:16:15.048043  302884 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:16:15.076013  302884 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
	I1108 09:16:15.076913  302884 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-69402498439f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:64:3c:58:48:b9} reservation:<nil>}
	I1108 09:16:15.077960  302884 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11dfd15cc420 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:1d:c0:7a:ca:31} reservation:<nil>}
	I1108 09:16:15.079133  302884 network.go:206] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001ec8b10}
	I1108 09:16:15.079166  302884 network_create.go:124] attempt to create docker network default-k8s-diff-port-677902 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I1108 09:16:15.079219  302884 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 default-k8s-diff-port-677902
	I1108 09:16:15.171652  302884 network_create.go:108] docker network default-k8s-diff-port-677902 192.168.76.0/24 created
	I1108 09:16:15.171687  302884 kic.go:121] calculated static IP "192.168.76.2" for the "default-k8s-diff-port-677902" container
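
network_create walks the private 192.168.x.0/24 candidates in order, skips the three subnets already claimed by other profiles' bridges, and settles on 192.168.76.0/24; the container's static IP is then simply .2, the first client address of that range. A hypothetical spot-check of the bridge it just created:

	docker network inspect default-k8s-diff-port-677902 \
	  --format '{{(index .IPAM.Config 0).Subnet}} via {{(index .IPAM.Config 0).Gateway}}'
	# expected: 192.168.76.0/24 via 192.168.76.1
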
	I1108 09:16:15.171753  302884 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:16:15.199943  302884 cli_runner.go:164] Run: docker volume create default-k8s-diff-port-677902 --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:16:15.225618  302884 oci.go:103] Successfully created a docker volume default-k8s-diff-port-677902
	I1108 09:16:15.225772  302884 cli_runner.go:164] Run: docker run --rm --name default-k8s-diff-port-677902-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --entrypoint /usr/bin/test -v default-k8s-diff-port-677902:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:16:15.866328  302884 oci.go:107] Successfully prepared a docker volume default-k8s-diff-port-677902
	I1108 09:16:15.866376  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:15.866401  302884 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:16:15.866471  302884 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-677902:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 09:16:19.052301  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	W1108 09:16:21.552514  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:20.584332  302884 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v default-k8s-diff-port-677902:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.717760526s)
	I1108 09:16:20.584367  302884 kic.go:203] duration metric: took 4.717962939s to extract preloaded images to volume ...
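
This is the preload shortcut: rather than pulling images after boot, a throwaway kicbase container mounts the profile volume and untars the cached cri-o image store straight into it (4.7s here). A sketch of how one could peek at what landed in the volume, assuming the cri-o preload unpacks under lib/containers:

	KIC='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1'
	docker run --rm -v default-k8s-diff-port-677902:/var --entrypoint /bin/ls "$KIC" /var/lib/containers
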
	W1108 09:16:20.584469  302884 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:16:20.584509  302884 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:16:20.584562  302884 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:16:20.649658  302884 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname default-k8s-diff-port-677902 --name default-k8s-diff-port-677902 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=default-k8s-diff-port-677902 --network default-k8s-diff-port-677902 --ip 192.168.76.2 --volume default-k8s-diff-port-677902:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8444 --publish=127.0.0.1::8444 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:16:20.985463  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Running}}
	I1108 09:16:21.005078  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.023858  302884 cli_runner.go:164] Run: docker exec default-k8s-diff-port-677902 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:16:21.072397  302884 oci.go:144] the created container "default-k8s-diff-port-677902" has a running status.
	I1108 09:16:21.072432  302884 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa...
	I1108 09:16:21.328004  302884 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:16:21.358901  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.381864  302884 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:16:21.381926  302884 kic_runner.go:114] Args: [docker exec --privileged default-k8s-diff-port-677902 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:16:21.429674  302884 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:16:21.450173  302884 machine.go:94] provisionDockerMachine start ...
	I1108 09:16:21.450256  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.471253  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.471544  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.471559  302884 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:16:21.604466  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:16:21.604500  302884 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-677902"
	I1108 09:16:21.604558  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.625801  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.626035  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.626052  302884 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-677902 && echo "default-k8s-diff-port-677902" | sudo tee /etc/hostname
	I1108 09:16:21.767180  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:16:21.767256  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:21.786052  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:21.786341  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:21.786363  302884 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-677902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-677902/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-677902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:16:21.917181  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:16:21.917219  302884 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:16:21.917239  302884 ubuntu.go:190] setting up certificates
	I1108 09:16:21.917247  302884 provision.go:84] configureAuth start
	I1108 09:16:21.917317  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:21.935307  302884 provision.go:143] copyHostCerts
	I1108 09:16:21.935370  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:16:21.935382  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:16:21.935449  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:16:21.935553  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:16:21.935562  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:16:21.935591  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:16:21.935701  302884 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:16:21.935713  302884 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:16:21.935739  302884 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:16:21.935803  302884 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-677902 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-677902 localhost minikube]
	I1108 09:16:22.042345  302884 provision.go:177] copyRemoteCerts
	I1108 09:16:22.042398  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:16:22.042450  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.062501  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.156803  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:16:22.176432  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:16:22.194210  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:16:22.212199  302884 provision.go:87] duration metric: took 294.93803ms to configureAuth
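
configureAuth generated a server certificate whose SANs (listed at 09:16:21.935803) cover every name the machine might be dialed by, then copied it plus the CA into /etc/docker on the node. A hypothetical check that the SAN list made it onto the remote cert:

	minikube -p default-k8s-diff-port-677902 ssh -- \
	  "sudo openssl x509 -in /etc/docker/server.pem -noout -ext subjectAltName"
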
	I1108 09:16:22.212230  302884 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:16:22.212437  302884 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:22.212551  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.231181  302884 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:22.231443  302884 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33104 <nil> <nil>}
	I1108 09:16:22.231463  302884 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:16:22.470271  302884 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:16:22.470308  302884 machine.go:97] duration metric: took 1.020112912s to provisionDockerMachine
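
The SSH command just above wrote CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 ' into /etc/sysconfig/crio.minikube and restarted crio, so the whole service CIDR is treated as an insecure registry range. The kicbase crio unit presumably sources that file as an environment file; one way to confirm the wiring (an assumption about the unit layout, not verified by this test):

	minikube -p default-k8s-diff-port-677902 ssh -- "systemctl cat crio | grep -i minikube"
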
	I1108 09:16:22.470320  302884 client.go:176] duration metric: took 7.481335007s to LocalClient.Create
	I1108 09:16:22.470341  302884 start.go:167] duration metric: took 7.481404005s to libmachine.API.Create "default-k8s-diff-port-677902"
	I1108 09:16:22.470350  302884 start.go:293] postStartSetup for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:16:22.470362  302884 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:16:22.470433  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:16:22.470471  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.490818  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.586821  302884 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:16:22.590810  302884 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:16:22.590839  302884 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:16:22.590852  302884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:16:22.591149  302884 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:16:22.591343  302884 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:16:22.591507  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:16:22.600330  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:16:22.620675  302884 start.go:296] duration metric: took 150.312864ms for postStartSetup
	I1108 09:16:22.621005  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:22.638917  302884 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:16:22.639195  302884 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:16:22.639233  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.658713  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.750655  302884 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:16:22.755273  302884 start.go:128] duration metric: took 7.770253809s to createHost
	I1108 09:16:22.755312  302884 start.go:83] releasing machines lock for "default-k8s-diff-port-677902", held for 7.770414218s
	I1108 09:16:22.755394  302884 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:16:22.773899  302884 ssh_runner.go:195] Run: cat /version.json
	I1108 09:16:22.773917  302884 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:16:22.773948  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.773974  302884 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:16:22.794752  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.795127  302884 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33104 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:16:22.889663  302884 ssh_runner.go:195] Run: systemctl --version
	I1108 09:16:22.942216  302884 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:16:22.977581  302884 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:16:22.982348  302884 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:16:22.982411  302884 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:16:23.008837  302884 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
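
With kindnet chosen as the CNI, minikube sidelines the bridge configs shipped in the base image by renaming them with a .mk_disabled suffix, so cri-o cannot pick a conflicting default network. Listing the directory afterwards (hypothetical) should show the renamed files:

	minikube -p default-k8s-diff-port-677902 ssh -- "ls /etc/cni/net.d"
	# expected, derived from the "disabled [...]" line above:
	#   10-crio-bridge.conflist.disabled.mk_disabled
	#   87-podman-bridge.conflist.mk_disabled
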
	I1108 09:16:23.008860  302884 start.go:496] detecting cgroup driver to use...
	I1108 09:16:23.008896  302884 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:16:23.008949  302884 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:16:23.025177  302884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:16:23.037624  302884 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:16:23.037681  302884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:16:23.054660  302884 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:16:23.073210  302884 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:16:23.155568  302884 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:16:23.244179  302884 docker.go:234] disabling docker service ...
	I1108 09:16:23.244249  302884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:16:23.263226  302884 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:16:23.276679  302884 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:16:23.369719  302884 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:16:23.452958  302884 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:16:23.465534  302884 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:16:23.480351  302884 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:16:23.480429  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.490576  302884 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:16:23.490636  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.499772  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.508365  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.517456  302884 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:16:23.525954  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.535277  302884 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:16:23.549170  302884 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
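
Taken together, the sed calls above leave /etc/crio/crio.conf.d/02-crio.conf pinning the pause image, switching the cgroup manager to systemd (matching the driver detected on the host at 09:16:23.008896), parenting conmon into the pod cgroup, and opening unprivileged ports from 0. The net effect, reconstructed from the commands rather than read off the node:

	minikube -p default-k8s-diff-port-677902 ssh -- \
	  "grep -E 'pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged' /etc/crio/crio.conf.d/02-crio.conf"
	# pause_image = "registry.k8s.io/pause:3.10.1"
	# cgroup_manager = "systemd"
	# conmon_cgroup = "pod"
	#   "net.ipv4.ip_unprivileged_port_start=0",
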
	I1108 09:16:23.558258  302884 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:16:23.565676  302884 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:16:23.573369  302884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:16:23.653541  302884 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:16:23.767673  302884 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:16:23.767729  302884 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:16:23.771780  302884 start.go:564] Will wait 60s for crictl version
	I1108 09:16:23.771829  302884 ssh_runner.go:195] Run: which crictl
	I1108 09:16:23.775330  302884 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:16:23.799928  302884 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:16:23.800010  302884 ssh_runner.go:195] Run: crio --version
	I1108 09:16:23.827743  302884 ssh_runner.go:195] Run: crio --version
	I1108 09:16:23.857164  302884 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	W1108 09:16:18.941803  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	W1108 09:16:20.942622  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	W1108 09:16:23.441685  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:23.858390  302884 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:16:23.875734  302884 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:16:23.879850  302884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:16:23.890489  302884 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:16:23.890611  302884 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:23.890671  302884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:16:23.922889  302884 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:16:23.922910  302884 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:16:23.922950  302884 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:16:23.948186  302884 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:16:23.948207  302884 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:16:23.948214  302884 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1108 09:16:23.948333  302884 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-677902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
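	The unit text above is not written into kubelet.service itself; it lands in a systemd drop-in (the scp at 09:16:24 below targets /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, 378 bytes). One way to inspect the merged result on a running profile, assuming ssh access into the node:
	
	  minikube -p default-k8s-diff-port-677902 ssh -- \
	    systemctl cat kubelet   # prints kubelet.service plus every *.conf drop-in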
	I1108 09:16:23.948416  302884 ssh_runner.go:195] Run: crio config
	I1108 09:16:23.994577  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:23.994603  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:23.994707  302884 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:16:23.994758  302884 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-677902 NodeName:default-k8s-diff-port-677902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:16:23.994909  302884 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-677902"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:16:23.994977  302884 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:16:24.003550  302884 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:16:24.003613  302884 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:16:24.011668  302884 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:16:24.025570  302884 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:16:24.040656  302884 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
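	At this point the full four-document manifest (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) is on the node as /var/tmp/minikube/kubeadm.yaml.new. Recent kubeadm releases can lint such a file before init is ever attempted; a hedged check using the binaries path from this log:
	
	  sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	    --config /var/tmp/minikube/kubeadm.yaml.new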
	I1108 09:16:24.053685  302884 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:16:24.057813  302884 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:16:24.068090  302884 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:16:24.153388  302884 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:16:24.180756  302884 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902 for IP: 192.168.76.2
	I1108 09:16:24.180778  302884 certs.go:195] generating shared ca certs ...
	I1108 09:16:24.180792  302884 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.180962  302884 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:16:24.181003  302884 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:16:24.181013  302884 certs.go:257] generating profile certs ...
	I1108 09:16:24.181084  302884 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key
	I1108 09:16:24.181110  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt with IP's: []
	I1108 09:16:24.249417  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt ...
	I1108 09:16:24.249443  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.crt: {Name:mkb0424a7b2244acd4c9b08e8fd3832ca89c8cee Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.249643  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key ...
	I1108 09:16:24.249660  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key: {Name:mk98228a5537d26558a0a8aa80142320b934942d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.249773  302884 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273
	I1108 09:16:24.249793  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.76.2]
	I1108 09:16:24.369815  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 ...
	I1108 09:16:24.369843  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273: {Name:mkfff96a8818db7317888f2704b4dce1877844fd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.370020  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273 ...
	I1108 09:16:24.370036  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273: {Name:mkd7e2641bb265c1b14bb815272c25677391281b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.370138  302884 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt.36d5c273 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt
	I1108 09:16:24.370218  302884 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key
	I1108 09:16:24.370275  302884 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key
	I1108 09:16:24.370302  302884 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt with IP's: []
	I1108 09:16:24.474350  302884 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt ...
	I1108 09:16:24.474381  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt: {Name:mk129990eb5be69a3128d0b5b94ee200eae7c775 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:16:24.474565  302884 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key ...
	I1108 09:16:24.474588  302884 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key: {Name:mk588b95436fa4f4c5adaa76c8236e776fdef198 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
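	Each Generating/Writing pair above leaves an x509 keypair under the profile directory; openssl can confirm what actually went into a cert. For example, the apiserver cert generated at 09:16:24.369815 should carry exactly the four IP SANs listed there (-ext requires OpenSSL 1.1.1 or newer):
	
	  openssl x509 -noout -subject -dates -ext subjectAltName \
	    -in /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt
	  # expect IP SANs 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.168.76.2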
	I1108 09:16:24.474803  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:16:24.474841  302884 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:16:24.474852  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:16:24.474873  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:16:24.474894  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:16:24.474915  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:16:24.474951  302884 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:16:24.475489  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:16:24.494518  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:16:24.512401  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:16:24.530678  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:16:24.548124  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:16:24.566472  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:16:24.584982  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:16:24.603982  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	W1108 09:16:24.051828  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	W1108 09:16:26.552224  288696 node_ready.go:57] node "no-preload-220714" has "Ready":"False" status (will retry)
	I1108 09:16:27.551990  288696 node_ready.go:49] node "no-preload-220714" is "Ready"
	I1108 09:16:27.552021  288696 node_ready.go:38] duration metric: took 12.503203095s for node "no-preload-220714" to be "Ready" ...
	I1108 09:16:27.552043  288696 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:27.552094  288696 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:27.567072  288696 api_server.go:72] duration metric: took 13.20624104s to wait for apiserver process to appear ...
	I1108 09:16:27.567097  288696 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:27.567115  288696 api_server.go:253] Checking apiserver healthz at https://192.168.94.2:8443/healthz ...
	I1108 09:16:27.571234  288696 api_server.go:279] https://192.168.94.2:8443/healthz returned 200:
	ok
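	The healthz probe is a plain HTTPS GET against the node IP and port shown; the 200/ok pair is reproducible from the host (-k because the serving cert is signed by the cluster-local minikubeCA):
	
	  curl -k https://192.168.94.2:8443/healthz
	  # ok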
	I1108 09:16:27.572225  288696 api_server.go:141] control plane version: v1.34.1
	I1108 09:16:27.572252  288696 api_server.go:131] duration metric: took 5.147393ms to wait for apiserver health ...
	I1108 09:16:27.572262  288696 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:27.575571  288696 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:27.575606  288696 system_pods.go:61] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.575613  288696 system_pods.go:61] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.575621  288696 system_pods.go:61] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.575627  288696 system_pods.go:61] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.575636  288696 system_pods.go:61] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.575642  288696 system_pods.go:61] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.575649  288696 system_pods.go:61] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.575656  288696 system_pods.go:61] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.575667  288696 system_pods.go:74] duration metric: took 3.395544ms to wait for pod list to return data ...
	I1108 09:16:27.575676  288696 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:27.578421  288696 default_sa.go:45] found service account: "default"
	I1108 09:16:27.578442  288696 default_sa.go:55] duration metric: took 2.756827ms for default service account to be created ...
	I1108 09:16:27.578453  288696 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:27.581851  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:27.581882  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.581890  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.581898  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.581904  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.581909  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.581914  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.581918  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.581925  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.582377  288696 retry.go:31] will retry after 309.619866ms: missing components: kube-dns
	I1108 09:16:27.897123  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:27.897166  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:27.897176  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:27.897183  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:27.897189  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:27.897196  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:27.897201  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:27.897206  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:27.897213  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:27.897230  288696 retry.go:31] will retry after 292.226039ms: missing components: kube-dns
	W1108 09:16:25.442185  294020 node_ready.go:57] node "embed-certs-271910" has "Ready":"False" status (will retry)
	I1108 09:16:26.441536  294020 node_ready.go:49] node "embed-certs-271910" is "Ready"
	I1108 09:16:26.441573  294020 node_ready.go:38] duration metric: took 12.003041862s for node "embed-certs-271910" to be "Ready" ...
	I1108 09:16:26.441586  294020 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:16:26.441646  294020 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:16:26.454331  294020 api_server.go:72] duration metric: took 12.418379921s to wait for apiserver process to appear ...
	I1108 09:16:26.454357  294020 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:16:26.454382  294020 api_server.go:253] Checking apiserver healthz at https://192.168.85.2:8443/healthz ...
	I1108 09:16:26.458665  294020 api_server.go:279] https://192.168.85.2:8443/healthz returned 200:
	ok
	I1108 09:16:26.459882  294020 api_server.go:141] control plane version: v1.34.1
	I1108 09:16:26.459909  294020 api_server.go:131] duration metric: took 5.544789ms to wait for apiserver health ...
	I1108 09:16:26.459925  294020 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:26.463219  294020 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:26.463256  294020 system_pods.go:61] "coredns-66bc5c9577-cbw4j" [b1a3271b-2b58-460a-98e7-29636a0c2860] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:26.463263  294020 system_pods.go:61] "etcd-embed-certs-271910" [5ce2f3f4-0806-4e34-a0fc-82eb8ddedc8f] Running
	I1108 09:16:26.463270  294020 system_pods.go:61] "kindnet-49l78" [bb346bcf-44a7-4255-a33c-fdb05b6193f2] Running
	I1108 09:16:26.463276  294020 system_pods.go:61] "kube-apiserver-embed-certs-271910" [ed4f4bb9-d9c7-4258-b20d-8f6d8a3c2efa] Running
	I1108 09:16:26.463300  294020 system_pods.go:61] "kube-controller-manager-embed-certs-271910" [7f2587b6-bd76-413d-966a-01f8dc17858f] Running
	I1108 09:16:26.463306  294020 system_pods.go:61] "kube-proxy-lwbl6" [8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c] Running
	I1108 09:16:26.463315  294020 system_pods.go:61] "kube-scheduler-embed-certs-271910" [026e9843-832c-4e8e-8a26-831b5eaede98] Running
	I1108 09:16:26.463320  294020 system_pods.go:61] "storage-provisioner" [69b5b176-edf7-4eda-82be-7e9980c13459] Running
	I1108 09:16:26.463326  294020 system_pods.go:74] duration metric: took 3.393092ms to wait for pod list to return data ...
	I1108 09:16:26.463335  294020 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:26.465623  294020 default_sa.go:45] found service account: "default"
	I1108 09:16:26.465643  294020 default_sa.go:55] duration metric: took 2.299772ms for default service account to be created ...
	I1108 09:16:26.465652  294020 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:26.468371  294020 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:26.468405  294020 system_pods.go:89] "coredns-66bc5c9577-cbw4j" [b1a3271b-2b58-460a-98e7-29636a0c2860] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:26.468415  294020 system_pods.go:89] "etcd-embed-certs-271910" [5ce2f3f4-0806-4e34-a0fc-82eb8ddedc8f] Running
	I1108 09:16:26.468422  294020 system_pods.go:89] "kindnet-49l78" [bb346bcf-44a7-4255-a33c-fdb05b6193f2] Running
	I1108 09:16:26.468428  294020 system_pods.go:89] "kube-apiserver-embed-certs-271910" [ed4f4bb9-d9c7-4258-b20d-8f6d8a3c2efa] Running
	I1108 09:16:26.468434  294020 system_pods.go:89] "kube-controller-manager-embed-certs-271910" [7f2587b6-bd76-413d-966a-01f8dc17858f] Running
	I1108 09:16:26.468440  294020 system_pods.go:89] "kube-proxy-lwbl6" [8ea17a1e-d2e5-47f0-98ef-3ecceb4b786c] Running
	I1108 09:16:26.468446  294020 system_pods.go:89] "kube-scheduler-embed-certs-271910" [026e9843-832c-4e8e-8a26-831b5eaede98] Running
	I1108 09:16:26.468454  294020 system_pods.go:89] "storage-provisioner" [69b5b176-edf7-4eda-82be-7e9980c13459] Running
	I1108 09:16:26.468463  294020 system_pods.go:126] duration metric: took 2.804388ms to wait for k8s-apps to be running ...
	I1108 09:16:26.468475  294020 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:26.468534  294020 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:26.482166  294020 system_svc.go:56] duration metric: took 13.682703ms WaitForService to wait for kubelet
	I1108 09:16:26.482193  294020 kubeadm.go:587] duration metric: took 12.446246908s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:26.482214  294020 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:26.485327  294020 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:26.485356  294020 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:26.485372  294020 node_conditions.go:105] duration metric: took 3.153381ms to run NodePressure ...
	I1108 09:16:26.485386  294020 start.go:242] waiting for startup goroutines ...
	I1108 09:16:26.485396  294020 start.go:247] waiting for cluster config update ...
	I1108 09:16:26.485411  294020 start.go:256] writing updated cluster config ...
	I1108 09:16:26.485699  294020 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:26.489800  294020 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:26.493546  294020 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.499143  294020 pod_ready.go:94] pod "coredns-66bc5c9577-cbw4j" is "Ready"
	I1108 09:16:27.499173  294020 pod_ready.go:86] duration metric: took 1.005603354s for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.501546  294020 pod_ready.go:83] waiting for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.507048  294020 pod_ready.go:94] pod "etcd-embed-certs-271910" is "Ready"
	I1108 09:16:27.507073  294020 pod_ready.go:86] duration metric: took 5.504922ms for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.509054  294020 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.512694  294020 pod_ready.go:94] pod "kube-apiserver-embed-certs-271910" is "Ready"
	I1108 09:16:27.512715  294020 pod_ready.go:86] duration metric: took 3.646ms for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.514487  294020 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.697453  294020 pod_ready.go:94] pod "kube-controller-manager-embed-certs-271910" is "Ready"
	I1108 09:16:27.697476  294020 pod_ready.go:86] duration metric: took 182.972054ms for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:27.898149  294020 pod_ready.go:83] waiting for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.297629  294020 pod_ready.go:94] pod "kube-proxy-lwbl6" is "Ready"
	I1108 09:16:28.297663  294020 pod_ready.go:86] duration metric: took 399.483472ms for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.497998  294020 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.897338  294020 pod_ready.go:94] pod "kube-scheduler-embed-certs-271910" is "Ready"
	I1108 09:16:28.897364  294020 pod_ready.go:86] duration metric: took 399.337987ms for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:28.897376  294020 pod_ready.go:40] duration metric: took 2.407548053s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:28.950786  294020 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:16:28.952604  294020 out.go:179] * Done! kubectl is now configured to use "embed-certs-271910" cluster and "default" namespace by default
	I1108 09:16:24.622161  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:16:24.642050  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:16:24.660239  302884 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:16:24.678050  302884 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:16:24.691686  302884 ssh_runner.go:195] Run: openssl version
	I1108 09:16:24.697945  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:16:24.707064  302884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:16:24.711018  302884 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:16:24.711107  302884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:16:24.746715  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:16:24.755710  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:16:24.764114  302884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:16:24.767998  302884 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:16:24.768047  302884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:16:24.802977  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:16:24.811920  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:16:24.820490  302884 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:16:24.824538  302884 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:16:24.824586  302884 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:16:24.859077  302884 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
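	The three test -L || ln -fs steps above maintain OpenSSL's hash-symlink layout: every CA under /etc/ssl/certs must be reachable as <subject-hash>.0, and the hash is exactly what the preceding openssl x509 -hash calls print. For the minikube CA in this run:
	
	  openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	  # b5213941   -> hence the symlink /etc/ssl/certs/b5213941.0 created above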
	I1108 09:16:24.868630  302884 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:16:24.872519  302884 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:16:24.872569  302884 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:16:24.872624  302884 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:16:24.872677  302884 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:16:24.900788  302884 cri.go:89] found id: ""
	I1108 09:16:24.900863  302884 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:16:24.909357  302884 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:16:24.917330  302884 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:16:24.917379  302884 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:16:24.925073  302884 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:16:24.925089  302884 kubeadm.go:158] found existing configuration files:
	
	I1108 09:16:24.925129  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I1108 09:16:24.933049  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:16:24.933102  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:16:24.940684  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I1108 09:16:24.948512  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:16:24.948569  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:16:24.955672  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I1108 09:16:24.963146  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:16:24.963196  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:16:24.970559  302884 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I1108 09:16:24.978321  302884 kubeadm.go:164] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:16:24.978370  302884 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:16:24.985648  302884 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:16:25.048029  302884 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:16:25.112944  302884 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
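	Both preflight warnings are expected inside the kic container: the configs kernel module is not shipped with GCP's 6.8.0-1043-gcp kernel, and minikube manages the kubelet unit itself rather than enabling it. The failed verification can be approximated by hand; kubeadm's system validator looks for the kernel config in roughly these places:
	
	  sudo modprobe configs                          # fails here: module not present
	  ls /proc/config.gz /boot/config-$(uname -r)    # fallback locations for the kernel config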
	I1108 09:16:28.193963  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:28.194002  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:28.194010  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:28.194016  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:28.194020  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:28.194024  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:28.194027  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:28.194029  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:28.194034  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:28.194082  288696 retry.go:31] will retry after 382.783963ms: missing components: kube-dns
	I1108 09:16:28.581516  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:28.581565  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:28.581575  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:28.581583  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:28.581589  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:28.581595  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:28.581600  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:28.581605  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:28.581620  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I1108 09:16:28.581636  288696 retry.go:31] will retry after 411.561067ms: missing components: kube-dns
	I1108 09:16:28.997583  288696 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:28.997612  288696 system_pods.go:89] "coredns-66bc5c9577-zdb97" [08217c32-38fe-4de7-a9d6-72575dc90891] Running
	I1108 09:16:28.997617  288696 system_pods.go:89] "etcd-no-preload-220714" [ef085faa-25f1-44d9-b25d-fb0d4ead67db] Running
	I1108 09:16:28.997621  288696 system_pods.go:89] "kindnet-9sg4x" [de643664-dad3-47e4-914d-a252519eabf4] Running
	I1108 09:16:28.997624  288696 system_pods.go:89] "kube-apiserver-no-preload-220714" [4ff0ae2d-aebc-4f8f-b117-564a32e0a64a] Running
	I1108 09:16:28.997628  288696 system_pods.go:89] "kube-controller-manager-no-preload-220714" [25169610-7bb9-45a1-8803-4d4dad0a58b1] Running
	I1108 09:16:28.997631  288696 system_pods.go:89] "kube-proxy-66cm9" [af9e3993-de19-4fa1-82c7-24f943b01a5a] Running
	I1108 09:16:28.997634  288696 system_pods.go:89] "kube-scheduler-no-preload-220714" [983c41d9-b34d-4140-ae61-165a5de92436] Running
	I1108 09:16:28.997637  288696 system_pods.go:89] "storage-provisioner" [e73cf787-c8e5-481b-af0a-1105a6ee932d] Running
	I1108 09:16:28.997643  288696 system_pods.go:126] duration metric: took 1.419185057s to wait for k8s-apps to be running ...
	I1108 09:16:28.997650  288696 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:28.997696  288696 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:29.013585  288696 system_svc.go:56] duration metric: took 15.92533ms WaitForService to wait for kubelet
	I1108 09:16:29.013619  288696 kubeadm.go:587] duration metric: took 14.652790412s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:29.013642  288696 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:29.016750  288696 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:29.016779  288696 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:29.016795  288696 node_conditions.go:105] duration metric: took 3.145779ms to run NodePressure ...
	I1108 09:16:29.016808  288696 start.go:242] waiting for startup goroutines ...
	I1108 09:16:29.016819  288696 start.go:247] waiting for cluster config update ...
	I1108 09:16:29.016856  288696 start.go:256] writing updated cluster config ...
	I1108 09:16:29.017134  288696 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:29.023264  288696 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:29.027422  288696 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.032160  288696 pod_ready.go:94] pod "coredns-66bc5c9577-zdb97" is "Ready"
	I1108 09:16:29.032183  288696 pod_ready.go:86] duration metric: took 4.738073ms for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.034406  288696 pod_ready.go:83] waiting for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.038508  288696 pod_ready.go:94] pod "etcd-no-preload-220714" is "Ready"
	I1108 09:16:29.038530  288696 pod_ready.go:86] duration metric: took 4.10382ms for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.040573  288696 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.044618  288696 pod_ready.go:94] pod "kube-apiserver-no-preload-220714" is "Ready"
	I1108 09:16:29.044639  288696 pod_ready.go:86] duration metric: took 4.044363ms for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.046698  288696 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.428886  288696 pod_ready.go:94] pod "kube-controller-manager-no-preload-220714" is "Ready"
	I1108 09:16:29.428927  288696 pod_ready.go:86] duration metric: took 382.210796ms for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:29.628632  288696 pod_ready.go:83] waiting for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.028531  288696 pod_ready.go:94] pod "kube-proxy-66cm9" is "Ready"
	I1108 09:16:30.028564  288696 pod_ready.go:86] duration metric: took 399.908302ms for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.227891  288696 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.628163  288696 pod_ready.go:94] pod "kube-scheduler-no-preload-220714" is "Ready"
	I1108 09:16:30.628191  288696 pod_ready.go:86] duration metric: took 400.274382ms for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:16:30.628205  288696 pod_ready.go:40] duration metric: took 1.604903677s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:30.675012  288696 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:16:30.677007  288696 out.go:179] * Done! kubectl is now configured to use "no-preload-220714" cluster and "default" namespace by default
	I1108 09:16:35.120895  302884 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:16:35.121004  302884 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:16:35.121175  302884 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:16:35.121292  302884 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:16:35.121353  302884 kubeadm.go:319] OS: Linux
	I1108 09:16:35.121435  302884 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:16:35.121506  302884 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:16:35.121565  302884 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:16:35.121638  302884 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:16:35.121724  302884 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:16:35.121806  302884 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:16:35.121887  302884 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:16:35.121964  302884 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:16:35.122058  302884 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:16:35.122184  302884 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:16:35.122330  302884 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:16:35.122408  302884 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:16:35.124893  302884 out.go:252]   - Generating certificates and keys ...
	I1108 09:16:35.124995  302884 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:16:35.125121  302884 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:16:35.125214  302884 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:16:35.125342  302884 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:16:35.125426  302884 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:16:35.125502  302884 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:16:35.125608  302884 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:16:35.125772  302884 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [default-k8s-diff-port-677902 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:16:35.125840  302884 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:16:35.125968  302884 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [default-k8s-diff-port-677902 localhost] and IPs [192.168.76.2 127.0.0.1 ::1]
	I1108 09:16:35.126073  302884 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:16:35.126170  302884 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:16:35.126238  302884 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:16:35.126344  302884 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:16:35.126420  302884 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:16:35.126498  302884 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:16:35.126572  302884 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:16:35.126677  302884 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:16:35.126758  302884 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:16:35.126870  302884 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:16:35.126956  302884 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:16:35.128406  302884 out.go:252]   - Booting up control plane ...
	I1108 09:16:35.128525  302884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:16:35.128638  302884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:16:35.128733  302884 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:16:35.128898  302884 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:16:35.128981  302884 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:16:35.129074  302884 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:16:35.129147  302884 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:16:35.129182  302884 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:16:35.129349  302884 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:16:35.129440  302884 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:16:35.129495  302884 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.001760288s
	I1108 09:16:35.129587  302884 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:16:35.129669  302884 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.76.2:8444/livez
	I1108 09:16:35.129744  302884 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:16:35.129825  302884 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:16:35.129905  302884 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.504019821s
	I1108 09:16:35.129979  302884 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 1.960672888s
	I1108 09:16:35.130064  302884 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 3.501227376s
	I1108 09:16:35.130235  302884 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:16:35.130443  302884 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:16:35.130523  302884 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:16:35.130788  302884 kubeadm.go:319] [mark-control-plane] Marking the node default-k8s-diff-port-677902 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:16:35.130879  302884 kubeadm.go:319] [bootstrap-token] Using token: o1hqaz.w0k7ft9j12ywfau7
	I1108 09:16:35.132386  302884 out.go:252]   - Configuring RBAC rules ...
	I1108 09:16:35.132551  302884 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:16:35.132650  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:16:35.132870  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:16:35.133032  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:16:35.133201  302884 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:16:35.133336  302884 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:16:35.133438  302884 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:16:35.133475  302884 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:16:35.133518  302884 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:16:35.133524  302884 kubeadm.go:319] 
	I1108 09:16:35.133583  302884 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:16:35.133600  302884 kubeadm.go:319] 
	I1108 09:16:35.133719  302884 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:16:35.133728  302884 kubeadm.go:319] 
	I1108 09:16:35.133770  302884 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:16:35.133844  302884 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:16:35.133922  302884 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:16:35.133932  302884 kubeadm.go:319] 
	I1108 09:16:35.134014  302884 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:16:35.134024  302884 kubeadm.go:319] 
	I1108 09:16:35.134080  302884 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:16:35.134089  302884 kubeadm.go:319] 
	I1108 09:16:35.134129  302884 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:16:35.134191  302884 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:16:35.134247  302884 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:16:35.134252  302884 kubeadm.go:319] 
	I1108 09:16:35.134366  302884 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:16:35.134429  302884 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:16:35.134435  302884 kubeadm.go:319] 
	I1108 09:16:35.134518  302884 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8444 --token o1hqaz.w0k7ft9j12ywfau7 \
	I1108 09:16:35.134671  302884 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 09:16:35.134700  302884 kubeadm.go:319] 	--control-plane 
	I1108 09:16:35.134706  302884 kubeadm.go:319] 
	I1108 09:16:35.134797  302884 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:16:35.134806  302884 kubeadm.go:319] 
	I1108 09:16:35.134911  302884 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8444 --token o1hqaz.w0k7ft9j12ywfau7 \
	I1108 09:16:35.135094  302884 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
	I1108 09:16:35.135113  302884 cni.go:84] Creating CNI manager for ""
	I1108 09:16:35.135121  302884 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:35.136736  302884 out.go:179] * Configuring CNI (Container Networking Interface) ...
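
	Note: minikube installs the recommended kindnet CNI itself at this point, so nothing further is required; the following is only a hedged verification sketch (the conflist file name and pod label assume the stock kindnet manifest):
	
		ls /etc/cni/net.d/                               # kindnetd drops 10-kindnet.conflist here
		kubectl -n kube-system get pods -l app=kindnet   # kindnet DaemonSet pod should be Running
		kubectl get nodes                                # node flips to Ready once the CNI is in place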
	
	
	==> CRI-O <==
	Nov 08 09:16:27 no-preload-220714 crio[778]: time="2025-11-08T09:16:27.641439181Z" level=info msg="Starting container: c26b9fd78ca7f5eea3759da710bfde6366e6fa05d3db5eb7822bc18e7723dce5" id=cdbf6e17-734c-42c2-883a-57c3ff1ebc21 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:27 no-preload-220714 crio[778]: time="2025-11-08T09:16:27.643636933Z" level=info msg="Started container" PID=2855 containerID=c26b9fd78ca7f5eea3759da710bfde6366e6fa05d3db5eb7822bc18e7723dce5 description=kube-system/coredns-66bc5c9577-zdb97/coredns id=cdbf6e17-734c-42c2-883a-57c3ff1ebc21 name=/runtime.v1.RuntimeService/StartContainer sandboxID=17acef47a0013b8eea829b5cb69fa7805b1a6b57fc1a3173cd03cf23a3e3a975
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.173258158Z" level=info msg="Running pod sandbox: default/busybox/POD" id=80c09dbd-4ac6-43a3-b403-9f7c9b6c0142 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.173409648Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.179348695Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a0a969627625af33fd18dc1313233410a648795ce40503e61c0e77d7e64b24c0 UID:79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e NetNS:/var/run/netns/79cba1cb-3f37-4397-9949-975203a71cdf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059eb38}] Aliases:map[]}"
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.179374974Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.194859061Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:a0a969627625af33fd18dc1313233410a648795ce40503e61c0e77d7e64b24c0 UID:79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e NetNS:/var/run/netns/79cba1cb-3f37-4397-9949-975203a71cdf Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc00059eb38}] Aliases:map[]}"
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.195010747Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.195872898Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.197611831Z" level=info msg="Ran pod sandbox a0a969627625af33fd18dc1313233410a648795ce40503e61c0e77d7e64b24c0 with infra container: default/busybox/POD" id=80c09dbd-4ac6-43a3-b403-9f7c9b6c0142 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.199238007Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=4f550871-4b28-494f-a391-d385113366bd name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.199511982Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=4f550871-4b28-494f-a391-d385113366bd name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.199564061Z" level=info msg="Neither image nor artifact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=4f550871-4b28-494f-a391-d385113366bd name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.200275752Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=14922d08-a07a-43b5-aac5-e6e1e232624a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:31 no-preload-220714 crio[778]: time="2025-11-08T09:16:31.202307998Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.549051186Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=14922d08-a07a-43b5-aac5-e6e1e232624a name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.5496453Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=be61581a-88f2-4205-ad07-674423545dd0 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.550928387Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=28d36c69-6a54-4564-a7f2-15bbc7b47e03 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.554527577Z" level=info msg="Creating container: default/busybox/busybox" id=f7f37dbd-eea5-423d-a9b6-217d5908ee99 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.554651708Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.55823963Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.558786008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.580619625Z" level=info msg="Created container 7754516f9662d6c384175245e59ba22a83c42a4b4cb24c426b35372f03c3bd12: default/busybox/busybox" id=f7f37dbd-eea5-423d-a9b6-217d5908ee99 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.581222885Z" level=info msg="Starting container: 7754516f9662d6c384175245e59ba22a83c42a4b4cb24c426b35372f03c3bd12" id=e837c5fb-8845-4678-9319-6eddaadab560 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:32 no-preload-220714 crio[778]: time="2025-11-08T09:16:32.583228016Z" level=info msg="Started container" PID=2930 containerID=7754516f9662d6c384175245e59ba22a83c42a4b4cb24c426b35372f03c3bd12 description=default/busybox/busybox id=e837c5fb-8845-4678-9319-6eddaadab560 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a0a969627625af33fd18dc1313233410a648795ce40503e61c0e77d7e64b24c0
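
	Note: the CRI-O entries above trace one full CRI round trip for the busybox pod: ImageStatus (miss), PullImage, CreateContainer, StartContainer. The same sequence can be driven by hand with crictl; this is an illustrative sketch, not part of the test run:
	
		crictl pull gcr.io/k8s-minikube/busybox:1.28.4-glibc   # same PullImage RPC the kubelet issued
		crictl images | grep busybox                           # ImageStatus now resolves the digest
		crictl ps --name busybox                               # the container started by StartContainer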
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	7754516f9662d       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   6 seconds ago       Running             busybox                   0                   a0a969627625a       busybox                                     default
	c26b9fd78ca7f       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      11 seconds ago      Running             coredns                   0                   17acef47a0013       coredns-66bc5c9577-zdb97                    kube-system
	71ca96002ce32       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      11 seconds ago      Running             storage-provisioner       0                   1ce1c73a6e75b       storage-provisioner                         kube-system
	b082da418a389       docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11    23 seconds ago      Running             kindnet-cni               0                   a2a8604c2a085       kindnet-9sg4x                               kube-system
	9bd6c544d01ea       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      24 seconds ago      Running             kube-proxy                0                   9c2f521b3e2fd       kube-proxy-66cm9                            kube-system
	c87bee4077487       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      35 seconds ago      Running             kube-controller-manager   0                   b5b4bff5ba512       kube-controller-manager-no-preload-220714   kube-system
	ebe00a8a7d599       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      35 seconds ago      Running             kube-apiserver            0                   d09723da6f73e       kube-apiserver-no-preload-220714            kube-system
	07050fab84966       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      35 seconds ago      Running             kube-scheduler            0                   3de2b1407b337       kube-scheduler-no-preload-220714            kube-system
	1a9fb503a89cf       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      35 seconds ago      Running             etcd                      0                   355e3d297b8a4       etcd-no-preload-220714                      kube-system
	
	
	==> coredns [c26b9fd78ca7f5eea3759da710bfde6366e6fa05d3db5eb7822bc18e7723dce5] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:46588 - 62951 "HINFO IN 504337965418073777.9121045763943562155. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.418386303s
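
	Note: the NXDOMAIN line above is CoreDNS's own HINFO self-check and is expected. A hedged way to exercise the same resolver from inside the cluster, using the common busybox-based probe (pod name and image tag are illustrative):
	
		kubectl run dnsprobe --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default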
	
	
	==> describe nodes <==
	Name:               no-preload-220714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-220714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=no-preload-220714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-220714
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:16:39 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:16:39 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:16:39 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:16:39 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:16:39 +0000   Sat, 08 Nov 2025 09:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-220714
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a3fafd7f-70e4-4709-9069-846d0b2022cf
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9s
	  kube-system                 coredns-66bc5c9577-zdb97                     100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     25s
	  kube-system                 etcd-no-preload-220714                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         32s
	  kube-system                 kindnet-9sg4x                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      26s
	  kube-system                 kube-apiserver-no-preload-220714             250m (3%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-controller-manager-no-preload-220714    200m (2%)     0 (0%)      0 (0%)           0 (0%)         31s
	  kube-system                 kube-proxy-66cm9                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         26s
	  kube-system                 kube-scheduler-no-preload-220714             100m (1%)     0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 24s                kube-proxy       
	  Normal  Starting                 36s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  36s (x8 over 36s)  kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    36s (x8 over 36s)  kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     36s (x8 over 36s)  kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  Starting                 31s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  31s                kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    31s                kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     31s                kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           26s                node-controller  Node no-preload-220714 event: Registered Node no-preload-220714 in Controller
	  Normal  NodeReady                12s                kubelet          Node no-preload-220714 status is now: NodeReady
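
	Note: the Allocated resources block is simply the column sums of the pod table above. CPU requests: 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m / 8000m ≈ 10% of the 8 allocatable CPUs (kubectl truncates the percentage). Memory requests: 70Mi + 100Mi + 50Mi = 220Mi; memory limits: 170Mi (coredns) + 50Mi (kindnet) = 220Mi.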
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [1a9fb503a89cf32e00afb351c8b387ebfb897542eceedbce120b85a4636654d6] <==
	{"level":"warn","ts":"2025-11-08T09:16:05.484362Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49934","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.492604Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49948","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.502180Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:49976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.516181Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50026","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.530483Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50064","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.540432Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50152","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.548214Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50186","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.556023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50204","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.564831Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50238","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.572380Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50276","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.580754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50310","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.590665Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50342","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.597458Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50368","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.606128Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50404","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.613773Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50446","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.628510Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50532","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.636469Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.645043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50570","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:05.712952Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:50734","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:16:19.191890Z","caller":"traceutil/trace.go:172","msg":"trace[1891183040] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"132.989933ms","start":"2025-11-08T09:16:19.058881Z","end":"2025-11-08T09:16:19.191871Z","steps":["trace[1891183040] 'process raft request'  (duration: 132.856327ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:16:19.348263Z","caller":"traceutil/trace.go:172","msg":"trace[1423564912] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"148.052375ms","start":"2025-11-08T09:16:19.200189Z","end":"2025-11-08T09:16:19.348241Z","steps":["trace[1423564912] 'process raft request'  (duration: 147.917887ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:16:19.665555Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"150.076174ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:16:19.665655Z","caller":"traceutil/trace.go:172","msg":"trace[1756290145] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:429; }","duration":"150.197683ms","start":"2025-11-08T09:16:19.515441Z","end":"2025-11-08T09:16:19.665639Z","steps":["trace[1756290145] 'range keys from in-memory index tree'  (duration: 150.027929ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:16:19.665603Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"115.087105ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/no-preload-220714\" limit:1 ","response":"range_response_count:1 size:4571"}
	{"level":"info","ts":"2025-11-08T09:16:19.665751Z","caller":"traceutil/trace.go:172","msg":"trace[78383419] range","detail":"{range_begin:/registry/minions/no-preload-220714; range_end:; response_count:1; response_revision:429; }","duration":"115.246006ms","start":"2025-11-08T09:16:19.550491Z","end":"2025-11-08T09:16:19.665737Z","steps":["trace[78383419] 'range keys from in-memory index tree'  (duration: 114.950844ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:16:39 up 59 min,  0 user,  load average: 5.16, 3.89, 2.45
	Linux no-preload-220714 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [b082da418a389e52d75f1192a073f2ec8325b914ad07ab5b3e14b6e4df97cc0d] <==
	I1108 09:16:16.639268       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:16.639642       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:16:16.639840       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:16.639864       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:16.639900       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:16Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:16.840375       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:16.840407       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:16.840419       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:16.940086       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:17.340681       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:17.340719       1 metrics.go:72] Registering metrics
	I1108 09:16:17.340827       1 controller.go:711] "Syncing nftables rules"
	I1108 09:16:26.846756       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:16:26.846818       1 main.go:301] handling current node
	I1108 09:16:36.843346       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:16:36.843393       1 main.go:301] handling current node
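
	Note: kindnet is only "handling current node" because this is a single-node cluster; on multi-node clusters it also programs one route per peer's pod CIDR. A hedged spot-check on the host (interface names depend on the CNI plugin in use):
	
		ip route | grep 10.244    # expect only the local 10.244.0.0/24 pod CIDR here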
	
	
	==> kube-apiserver [ebe00a8a7d599ca7948d92a68b2216c544db25c962382e6b0be2c539302ffb35] <==
	I1108 09:16:06.299758       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:16:06.300621       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:16:06.300867       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:06.308707       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:16:06.309389       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:16:06.310057       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:06.497503       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:16:07.203776       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:16:07.212509       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:16:07.212538       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:16:07.759013       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:16:07.803843       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:16:07.908655       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:16:07.919987       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.94.2]
	I1108 09:16:07.921432       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:16:07.927576       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:16:08.237672       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:16:08.704165       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:16:08.714632       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:16:08.724497       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:16:13.991217       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	I1108 09:16:14.102732       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:14.109915       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:14.349673       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	E1108 09:16:37.959006       1 conn.go:339] Error on socket receive: read tcp 192.168.94.2:8443->192.168.94.1:49972: use of closed network connection
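
	Note: the final "use of closed network connection" line is a client hanging up mid-watch, not an apiserver fault. The server's own health can be confirmed against the same /livez endpoint kubeadm's control-plane-check polls (illustrative command):
	
		kubectl get --raw '/livez?verbose' | tail -n 5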
	
	
	==> kube-controller-manager [c87bee40774879b2eefc2ee0c87bb69745cc2fc1e49f7ed99fbd9e2cffd0e734] <==
	I1108 09:16:13.235191       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:16:13.235320       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:16:13.235470       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:16:13.235556       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:16:13.235713       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:16:13.235736       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:16:13.235758       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:16:13.235844       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:16:13.235900       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:16:13.236005       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:16:13.236118       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:16:13.236355       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:16:13.236725       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:16:13.236909       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:16:13.237866       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:16:13.240173       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:16:13.242533       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:16:13.242588       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:16:13.243821       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:16:13.251737       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:16:13.251761       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:16:13.251768       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:16:13.253797       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:16:13.259363       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:16:28.259759       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [9bd6c544d01eade38915fb6b456619a87a81e2aa304c682c6c6788f81965b0aa] <==
	I1108 09:16:14.853312       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:16:15.042171       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:16:15.143756       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:16:15.143812       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1108 09:16:15.143898       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:16:15.176208       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:15.176371       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:16:15.185341       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:16:15.185990       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:16:15.186227       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:15.188736       1 config.go:309] "Starting node config controller"
	I1108 09:16:15.188761       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:16:15.188887       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:16:15.188919       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:16:15.189059       1 config.go:200] "Starting service config controller"
	I1108 09:16:15.189121       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:16:15.189087       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:16:15.189393       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:16:15.289795       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:16:15.289814       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:16:15.289845       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:16:15.289864       1 shared_informer.go:356] "Caches are synced" controller="node config"
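
	Note: kube-proxy's only complaint above is the unset nodePortAddresses, and its own message names the fix. The flag form it suggests, quoted here as a hedged illustration rather than a change made by this test run:
	
		kube-proxy --nodeport-addresses primary   # restrict NodePort listeners to the node's primary IP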
	
	
	==> kube-scheduler [07050fab8496667df13b466be340c30c0e41811b15bf3933cfd01c7dbee0c73e] <==
	E1108 09:16:06.265501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:16:06.265764       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:16:06.265876       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:16:06.266142       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:16:06.266266       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:16:06.266301       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:16:06.266319       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:16:06.266444       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:06.266617       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:06.266688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:16:06.266722       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:16:06.266745       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:16:06.266940       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:16:06.267057       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:16:07.122439       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:16:07.143734       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:16:07.167368       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:16:07.168414       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:16:07.185805       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:16:07.192019       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:07.226601       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:16:07.299522       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:07.313690       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:16:07.498712       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	I1108 09:16:08.958650       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
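
	Note: the burst of "Failed to watch ... forbidden" errors is the scheduler racing the RBAC bootstrap: the last denial is logged at 09:16:07 and the informer caches sync at 09:16:08, right after kubeadm's RBAC phase completes. A hedged check that the grant now exists:
	
		kubectl get clusterrolebinding system:kube-scheduler -o wide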
	
	
	==> kubelet <==
	Nov 08 09:16:09 no-preload-220714 kubelet[2245]: E1108 09:16:09.579600    2245 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-no-preload-220714\" already exists" pod="kube-system/kube-apiserver-no-preload-220714"
	Nov 08 09:16:09 no-preload-220714 kubelet[2245]: I1108 09:16:09.595790    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-no-preload-220714" podStartSLOduration=1.595747695 podStartE2EDuration="1.595747695s" podCreationTimestamp="2025-11-08 09:16:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:09.594798585 +0000 UTC m=+1.149095726" watchObservedRunningTime="2025-11-08 09:16:09.595747695 +0000 UTC m=+1.150044830"
	Nov 08 09:16:09 no-preload-220714 kubelet[2245]: I1108 09:16:09.596663    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-no-preload-220714" podStartSLOduration=2.596645154 podStartE2EDuration="2.596645154s" podCreationTimestamp="2025-11-08 09:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:09.579677222 +0000 UTC m=+1.133974367" watchObservedRunningTime="2025-11-08 09:16:09.596645154 +0000 UTC m=+1.150942300"
	Nov 08 09:16:09 no-preload-220714 kubelet[2245]: I1108 09:16:09.607784    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-no-preload-220714" podStartSLOduration=2.607761706 podStartE2EDuration="2.607761706s" podCreationTimestamp="2025-11-08 09:16:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:09.607729294 +0000 UTC m=+1.162026438" watchObservedRunningTime="2025-11-08 09:16:09.607761706 +0000 UTC m=+1.162058850"
	Nov 08 09:16:13 no-preload-220714 kubelet[2245]: I1108 09:16:13.277827    2245 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:16:13 no-preload-220714 kubelet[2245]: I1108 09:16:13.278555    2245 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060536    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de643664-dad3-47e4-914d-a252519eabf4-lib-modules\") pod \"kindnet-9sg4x\" (UID: \"de643664-dad3-47e4-914d-a252519eabf4\") " pod="kube-system/kindnet-9sg4x"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060595    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/af9e3993-de19-4fa1-82c7-24f943b01a5a-kube-proxy\") pod \"kube-proxy-66cm9\" (UID: \"af9e3993-de19-4fa1-82c7-24f943b01a5a\") " pod="kube-system/kube-proxy-66cm9"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060623    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/af9e3993-de19-4fa1-82c7-24f943b01a5a-lib-modules\") pod \"kube-proxy-66cm9\" (UID: \"af9e3993-de19-4fa1-82c7-24f943b01a5a\") " pod="kube-system/kube-proxy-66cm9"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060649    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvcm2\" (UniqueName: \"kubernetes.io/projected/af9e3993-de19-4fa1-82c7-24f943b01a5a-kube-api-access-tvcm2\") pod \"kube-proxy-66cm9\" (UID: \"af9e3993-de19-4fa1-82c7-24f943b01a5a\") " pod="kube-system/kube-proxy-66cm9"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060675    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/af9e3993-de19-4fa1-82c7-24f943b01a5a-xtables-lock\") pod \"kube-proxy-66cm9\" (UID: \"af9e3993-de19-4fa1-82c7-24f943b01a5a\") " pod="kube-system/kube-proxy-66cm9"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060696    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/de643664-dad3-47e4-914d-a252519eabf4-cni-cfg\") pod \"kindnet-9sg4x\" (UID: \"de643664-dad3-47e4-914d-a252519eabf4\") " pod="kube-system/kindnet-9sg4x"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060723    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de643664-dad3-47e4-914d-a252519eabf4-xtables-lock\") pod \"kindnet-9sg4x\" (UID: \"de643664-dad3-47e4-914d-a252519eabf4\") " pod="kube-system/kindnet-9sg4x"
	Nov 08 09:16:14 no-preload-220714 kubelet[2245]: I1108 09:16:14.060743    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fvq2z\" (UniqueName: \"kubernetes.io/projected/de643664-dad3-47e4-914d-a252519eabf4-kube-api-access-fvq2z\") pod \"kindnet-9sg4x\" (UID: \"de643664-dad3-47e4-914d-a252519eabf4\") " pod="kube-system/kindnet-9sg4x"
	Nov 08 09:16:15 no-preload-220714 kubelet[2245]: I1108 09:16:15.614684    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-66cm9" podStartSLOduration=2.614660315 podStartE2EDuration="2.614660315s" podCreationTimestamp="2025-11-08 09:16:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:15.613871273 +0000 UTC m=+7.168168435" watchObservedRunningTime="2025-11-08 09:16:15.614660315 +0000 UTC m=+7.168957459"
	Nov 08 09:16:16 no-preload-220714 kubelet[2245]: I1108 09:16:16.625007    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-9sg4x" podStartSLOduration=1.67060152 podStartE2EDuration="3.624981925s" podCreationTimestamp="2025-11-08 09:16:13 +0000 UTC" firstStartedPulling="2025-11-08 09:16:14.351377873 +0000 UTC m=+5.905675012" lastFinishedPulling="2025-11-08 09:16:16.305758273 +0000 UTC m=+7.860055417" observedRunningTime="2025-11-08 09:16:16.613403133 +0000 UTC m=+8.167700277" watchObservedRunningTime="2025-11-08 09:16:16.624981925 +0000 UTC m=+8.179279070"
	Nov 08 09:16:27 no-preload-220714 kubelet[2245]: I1108 09:16:27.252558    2245 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:16:27 no-preload-220714 kubelet[2245]: I1108 09:16:27.365227    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/e73cf787-c8e5-481b-af0a-1105a6ee932d-tmp\") pod \"storage-provisioner\" (UID: \"e73cf787-c8e5-481b-af0a-1105a6ee932d\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:27 no-preload-220714 kubelet[2245]: I1108 09:16:27.365276    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08217c32-38fe-4de7-a9d6-72575dc90891-config-volume\") pod \"coredns-66bc5c9577-zdb97\" (UID: \"08217c32-38fe-4de7-a9d6-72575dc90891\") " pod="kube-system/coredns-66bc5c9577-zdb97"
	Nov 08 09:16:27 no-preload-220714 kubelet[2245]: I1108 09:16:27.365326    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kt2vt\" (UniqueName: \"kubernetes.io/projected/08217c32-38fe-4de7-a9d6-72575dc90891-kube-api-access-kt2vt\") pod \"coredns-66bc5c9577-zdb97\" (UID: \"08217c32-38fe-4de7-a9d6-72575dc90891\") " pod="kube-system/coredns-66bc5c9577-zdb97"
	Nov 08 09:16:27 no-preload-220714 kubelet[2245]: I1108 09:16:27.365357    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mww\" (UniqueName: \"kubernetes.io/projected/e73cf787-c8e5-481b-af0a-1105a6ee932d-kube-api-access-62mww\") pod \"storage-provisioner\" (UID: \"e73cf787-c8e5-481b-af0a-1105a6ee932d\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:28 no-preload-220714 kubelet[2245]: I1108 09:16:28.643166    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-zdb97" podStartSLOduration=14.64314416 podStartE2EDuration="14.64314416s" podCreationTimestamp="2025-11-08 09:16:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:28.643049476 +0000 UTC m=+20.197346624" watchObservedRunningTime="2025-11-08 09:16:28.64314416 +0000 UTC m=+20.197441304"
	Nov 08 09:16:28 no-preload-220714 kubelet[2245]: I1108 09:16:28.655256    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=13.655232165 podStartE2EDuration="13.655232165s" podCreationTimestamp="2025-11-08 09:16:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:28.654018305 +0000 UTC m=+20.208315449" watchObservedRunningTime="2025-11-08 09:16:28.655232165 +0000 UTC m=+20.209529310"
	Nov 08 09:16:30 no-preload-220714 kubelet[2245]: I1108 09:16:30.888654    2245 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crfq2\" (UniqueName: \"kubernetes.io/projected/79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e-kube-api-access-crfq2\") pod \"busybox\" (UID: \"79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e\") " pod="default/busybox"
	Nov 08 09:16:32 no-preload-220714 kubelet[2245]: I1108 09:16:32.653604    2245 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=1.30307946 podStartE2EDuration="2.65358615s" podCreationTimestamp="2025-11-08 09:16:30 +0000 UTC" firstStartedPulling="2025-11-08 09:16:31.199925657 +0000 UTC m=+22.754222785" lastFinishedPulling="2025-11-08 09:16:32.55043234 +0000 UTC m=+24.104729475" observedRunningTime="2025-11-08 09:16:32.653357585 +0000 UTC m=+24.207654731" watchObservedRunningTime="2025-11-08 09:16:32.65358615 +0000 UTC m=+24.207883294"
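For readers decoding the kubelet lines above: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling); pods whose images were never pulled log the zero time "0001-01-01" and report an SLO duration equal to the E2E duration. A minimal Go sketch (illustrative arithmetic only, not kubelet code) reproducing the busybox numbers:

	package main

	import (
		"fmt"
		"time"
	)

	func main() {
		// Timestamps copied from the "default/busybox" kubelet line above.
		created, _ := time.Parse(time.RFC3339, "2025-11-08T09:16:30Z")
		running, _ := time.Parse(time.RFC3339Nano, "2025-11-08T09:16:32.653586150Z")
		pullStart, _ := time.Parse(time.RFC3339Nano, "2025-11-08T09:16:31.199925657Z")
		pullEnd, _ := time.Parse(time.RFC3339Nano, "2025-11-08T09:16:32.550432340Z")

		e2e := running.Sub(created)         // podStartE2EDuration = 2.65358615s
		slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration ~ 1.30307946s, as logged
		fmt.Println(e2e, slo)
	}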
	
	
	==> storage-provisioner [71ca96002ce32ca87fd962ff6114e6368c6529a1b02e221971ebac111c3b3a4e] <==
	I1108 09:16:27.649067       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:16:27.658181       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:16:27.658231       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:16:27.660749       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:27.665847       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:16:27.666058       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:16:27.666182       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43e594f5-edfa-4361-8eb4-8fe5628502f4", APIVersion:"v1", ResourceVersion:"446", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-220714_a35068b7-55a5-4fae-9cd3-22410d0f2584 became leader
	I1108 09:16:27.666261       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-220714_a35068b7-55a5-4fae-9cd3-22410d0f2584!
	W1108 09:16:27.668631       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:27.673740       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:16:27.767347       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-220714_a35068b7-55a5-4fae-9cd3-22410d0f2584!
	W1108 09:16:29.677465       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:29.683050       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:31.686867       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:31.691962       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:33.695516       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:33.699037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:35.702675       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:35.707709       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:37.710900       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:37.714766       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:39.718210       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:39.722194       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

-- /stdout --
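A note on the storage-provisioner block in the logs above: the Endpoints-deprecation warnings repeat roughly every two seconds because the provisioner's leader election keeps renewing a lock stored on the kube-system/k8s.io-minikube-hostpath Endpoints object. A hedged sketch for inspecting the current lock holder (assumes kubectl on PATH; the holder record lives in the standard control-plane.alpha.kubernetes.io/leader annotation):

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Dump the lock object's annotations; the leader record names the
		// holder identity seen in the provisioner log above.
		out, err := exec.Command("kubectl", "-n", "kube-system",
			"get", "endpoints", "k8s.io-minikube-hostpath",
			"-o", "jsonpath={.metadata.annotations}").CombinedOutput()
		if err != nil {
			fmt.Println("lookup failed:", err)
		}
		fmt.Println(string(out))
	}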
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220714 -n no-preload-220714
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-220714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (2.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.9s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (277.080656ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:02Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
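For context on the MK_ADDON_ENABLE_PAUSED error above: before enabling an addon, minikube probes the runtime for paused containers, and with crio that probe shells out to runc; /run/runc is missing on this node, so the probe itself fails before any addon work starts. A minimal Go sketch of an equivalent probe (illustrative, not minikube's actual code path):

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// runcState mirrors the fields of `runc list -f json` output that a
	// paused-state probe cares about.
	type runcState struct {
		ID     string `json:"id"`
		Status string `json:"status"`
	}

	func main() {
		// This is the command quoted in the error message above; on this node
		// it fails with "open /run/runc: no such file or directory".
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
		if err != nil {
			fmt.Println("list paused failed:", err)
			return
		}
		var states []runcState
		if err := json.Unmarshal(out, &states); err != nil {
			fmt.Println("unexpected output:", err)
			return
		}
		for _, s := range states {
			if s.Status == "paused" {
				fmt.Println("paused container:", s.ID)
			}
		}
	}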
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:213: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-677902 describe deploy/metrics-server -n kube-system: exit status 1 (68.993585ms)

** stderr ** 
	Error from server (NotFound): deployments.apps "metrics-server" not found

** /stderr **
start_stop_delete_test.go:215: failed to get info on auto-pause deployments. args "kubectl --context default-k8s-diff-port-677902 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:219: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
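The assertion above scans the deployment description for the rewritten image reference; because the enable step already failed, no metrics-server deployment exists and the expected string can never appear. A sketch of the check (illustrative, not the test's literal code):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Same command the harness runs above; it exits non-zero here because
		// the deployment was never created.
		out, _ := exec.Command("kubectl", "--context", "default-k8s-diff-port-677902",
			"describe", "deploy/metrics-server", "-n", "kube-system").CombinedOutput()
		if strings.Contains(string(out), "fake.domain/registry.k8s.io/echoserver:1.4") {
			fmt.Println("addon loaded the expected custom image")
		} else {
			fmt.Println("addon did not load correct image")
		}
	}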
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-677902
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-677902:

-- stdout --
	[
	    {
	        "Id": "1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2",
	        "Created": "2025-11-08T09:16:20.668171946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 304056,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:20.710453907Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/hosts",
	        "LogPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2-json.log",
	        "Name": "/default-k8s-diff-port-677902",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-677902:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-677902",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2",
	                "LowerDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-677902",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-677902/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-677902",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-677902",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-677902",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5de0ad32d1e1f89291be3ba6d0e5badad5caec29f086c4ecd3ce2d4777b52518",
	            "SandboxKey": "/var/run/docker/netns/5de0ad32d1e1",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33104"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33105"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33108"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33106"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33107"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-677902": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "0a:14:91:0e:1b:c8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3530cc966e776b586ccf4d2edbdd1f526df4bef1d7edd4ef4684fbf79284383f",
	                    "EndpointID": "b5ed9b3989dfbf1c3b31452f9abb615b80ad9b16514bd4c0ba53c3b8700c6165",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-677902",
	                        "1e7d7f902c4f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
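One thing worth reading out of the inspect dump above: guest ports are published only on 127.0.0.1 with ephemeral host ports (8444, the apiserver port for this profile, maps to 33107). A small Go sketch that extracts the same mapping with an inspect format template (hypothetical helper; the template walks .NetworkSettings.Ports exactly as laid out in the JSON):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		// Index .NetworkSettings.Ports for 8444/tcp and take its first binding.
		out, err := exec.Command("docker", "inspect", "-f",
			`{{(index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort}}`,
			"default-k8s-diff-port-677902").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // 33107 here
	}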
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs -n 25: (1.526014358s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-732849 sudo systemctl status containerd --all --full --no-pager                                                                                                                                                                     │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ ssh     │ -p bridge-732849 sudo systemctl cat containerd --no-pager                                                                                                                                                                                     │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /lib/systemd/system/containerd.service                                                                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo cat /etc/containerd/config.toml                                                                                                                                                                                         │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo containerd config dump                                                                                                                                                                                                  │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo crio config                                                                                                                                                                                                             │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p bridge-732849                                                                                                                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                                                                                               │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:16:56
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:16:56.948965  313008 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:16:56.949095  313008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:56.949106  313008 out.go:374] Setting ErrFile to fd 2...
	I1108 09:16:56.949112  313008 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:16:56.949437  313008 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:16:56.950031  313008 out.go:368] Setting JSON to false
	I1108 09:16:56.951568  313008 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3568,"bootTime":1762589849,"procs":315,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:16:56.951692  313008 start.go:143] virtualization: kvm guest
	I1108 09:16:56.953908  313008 out.go:179] * [no-preload-220714] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:16:56.955271  313008 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:16:56.955363  313008 notify.go:221] Checking for updates...
	I1108 09:16:56.958270  313008 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:16:56.959846  313008 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:16:56.961094  313008 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:16:56.962445  313008 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:16:56.963855  313008 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:16:56.965712  313008 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:16:56.966922  313008 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:16:56.995609  313008 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:16:56.995711  313008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:57.059166  313008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-08 09:16:57.047149062 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:57.059258  313008 docker.go:319] overlay module found
	I1108 09:16:57.061447  313008 out.go:179] * Using the docker driver based on existing profile
	I1108 09:16:57.062951  313008 start.go:309] selected driver: docker
	I1108 09:16:57.062972  313008 start.go:930] validating driver "docker" against &{Name:no-preload-220714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-220714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:16:57.063128  313008 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:16:57.063772  313008 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:16:57.128136  313008 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:66 OomKillDisable:false NGoroutines:78 SystemTime:2025-11-08 09:16:57.117475316 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:16:57.128505  313008 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:57.128545  313008 cni.go:84] Creating CNI manager for ""
	I1108 09:16:57.128604  313008 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:16:57.128657  313008 start.go:353] cluster config:
	{Name:no-preload-220714 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:no-preload-220714 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.94.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:16:57.130604  313008 out.go:179] * Starting "no-preload-220714" primary control-plane node in "no-preload-220714" cluster
	I1108 09:16:57.131747  313008 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:16:57.133148  313008 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:16:57.134216  313008 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:16:57.134242  313008 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:16:57.134338  313008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/no-preload-220714/config.json ...
	I1108 09:16:57.134475  313008 cache.go:107] acquiring lock: {Name:mk2d624179db9cbd67adb3aec3cfd046671def22 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134478  313008 cache.go:107] acquiring lock: {Name:mk04c660b1b46b72d2e29bb81be0a7bbb06c3ac8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134500  313008 cache.go:107] acquiring lock: {Name:mkeb195ae675568135232cb6bee1e07158f7d344 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134544  313008 cache.go:107] acquiring lock: {Name:mk73b2692f3a049a0505998e9024c21dcd4ff951 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134562  313008 cache.go:107] acquiring lock: {Name:mkb1963ca20a7d3f27d6f30f3e5806807002effd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134607  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I1108 09:16:57.134581  313008 cache.go:107] acquiring lock: {Name:mkcbaa89f5144514823a4deffcb08519f8a3ea07 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134617  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 exists
	I1108 09:16:57.134593  313008 cache.go:107] acquiring lock: {Name:mk83f563160f56448e5c87bc103cde7d33095e6e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134619  313008 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 157.19µs
	I1108 09:16:57.134623  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 exists
	I1108 09:16:57.134627  313008 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.12.1" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1" took 68.435µs
	I1108 09:16:57.134636  313008 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I1108 09:16:57.134638  313008 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.12.1 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.12.1 succeeded
	I1108 09:16:57.134638  313008 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.34.1" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1" took 125.133µs
	I1108 09:16:57.134607  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 exists
	I1108 09:16:57.134648  313008 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.34.1 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.34.1 succeeded
	I1108 09:16:57.134645  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 exists
	I1108 09:16:57.134655  313008 cache.go:96] cache image "registry.k8s.io/pause:3.10.1" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1" took 163.358µs
	I1108 09:16:57.134661  313008 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.34.1" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1" took 197.13µs
	I1108 09:16:57.134664  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 exists
	I1108 09:16:57.134668  313008 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.34.1 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.34.1 succeeded
	I1108 09:16:57.134663  313008 cache.go:80] save to tar file registry.k8s.io/pause:3.10.1 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/pause_3.10.1 succeeded
	I1108 09:16:57.134674  313008 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.34.1" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1" took 97.657µs
	I1108 09:16:57.134634  313008 cache.go:107] acquiring lock: {Name:mk34e3be9831359fb463cd325249b8932adcd236 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.134682  313008 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.34.1 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.34.1 succeeded
	I1108 09:16:57.134698  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 exists
	I1108 09:16:57.134723  313008 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.34.1" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1" took 179.408µs
	I1108 09:16:57.134736  313008 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.34.1 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.34.1 succeeded
	I1108 09:16:57.134776  313008 cache.go:115] /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 exists
	I1108 09:16:57.134791  313008 cache.go:96] cache image "registry.k8s.io/etcd:3.6.4-0" -> "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0" took 206.878µs
	I1108 09:16:57.134813  313008 cache.go:80] save to tar file registry.k8s.io/etcd:3.6.4-0 -> /home/jenkins/minikube-integration/21866-5860/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.6.4-0 succeeded
	I1108 09:16:57.134823  313008 cache.go:87] Successfully saved all images to host disk.
	I1108 09:16:57.155295  313008 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:16:57.155316  313008 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:16:57.155334  313008 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:16:57.155362  313008 start.go:360] acquireMachinesLock for no-preload-220714: {Name:mk11c4a36d6d053e5fea4e12ab3da9129dbc9552 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:16:57.155421  313008 start.go:364] duration metric: took 41.617µs to acquireMachinesLock for "no-preload-220714"
	I1108 09:16:57.155439  313008 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:16:57.155444  313008 fix.go:54] fixHost starting: 
	I1108 09:16:57.155651  313008 cli_runner.go:164] Run: docker container inspect no-preload-220714 --format={{.State.Status}}
	I1108 09:16:57.175700  313008 fix.go:112] recreateIfNeeded on no-preload-220714: state=Stopped err=<nil>
	W1108 09:16:57.175743  313008 fix.go:138] unexpected machine state, will restart: <nil>
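The fixHost step above decides between reusing and restarting the machine from the container's state string (state=Stopped here, hence the "will restart" warning). An equivalent one-off check, sketched with the same docker template shown on the cli_runner line:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("docker", "container", "inspect",
			"no-preload-220714", "--format={{.State.Status}}").Output()
		if err != nil {
			fmt.Println("inspect failed:", err)
			return
		}
		// Docker reports "exited" for a stopped node; minikube maps that to Stopped.
		fmt.Println("state:", strings.TrimSpace(string(out)))
	}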
	I1108 09:16:56.162078  310009 addons.go:515] duration metric: took 3.185230257s for enable addons: enabled=[storage-provisioner default-storageclass dashboard]
	I1108 09:16:56.163659  310009 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W1108 09:16:56.163748  310009 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I1108 09:16:56.657538  310009 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:16:56.662895  310009 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:16:56.664314  310009 api_server.go:141] control plane version: v1.28.0
	I1108 09:16:56.664342  310009 api_server.go:131] duration metric: took 507.712282ms to wait for apiserver health ...
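The 500-then-200 sequence above is normal apiserver warm-up: /healthz aggregates the named post-start hooks and flips to 200 "ok" once the one failing hook (rbac/bootstrap-roles) completes. A minimal Go polling sketch of the same probe (the cluster serves a self-signed cert, so verification is skipped here for illustration only):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for i := 0; i < 60; i++ {
			resp, err := client.Get("https://192.168.103.2:8443/healthz")
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				fmt.Printf("%d: %s\n", resp.StatusCode, body)
				if resp.StatusCode == http.StatusOK {
					return // apiserver reports "ok"
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
	}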
	I1108 09:16:56.664352  310009 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:16:56.669325  310009 system_pods.go:59] 8 kube-system pods found
	I1108 09:16:56.669357  310009 system_pods.go:61] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:56.669370  310009 system_pods.go:61] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:16:56.669382  310009 system_pods.go:61] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:56.669392  310009 system_pods.go:61] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:16:56.669403  310009 system_pods.go:61] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:16:56.669412  310009 system_pods.go:61] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:56.669422  310009 system_pods.go:61] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:16:56.669430  310009 system_pods.go:61] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Running
	I1108 09:16:56.669439  310009 system_pods.go:74] duration metric: took 5.079338ms to wait for pod list to return data ...
	I1108 09:16:56.669450  310009 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:16:56.672044  310009 default_sa.go:45] found service account: "default"
	I1108 09:16:56.672065  310009 default_sa.go:55] duration metric: took 2.607999ms for default service account to be created ...
	I1108 09:16:56.672080  310009 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:16:56.676932  310009 system_pods.go:86] 8 kube-system pods found
	I1108 09:16:56.676968  310009 system_pods.go:89] "coredns-5dd5756b68-88pvx" [f0e8ae90-cdf7-445d-8db5-59f7b2d33911] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:16:56.676979  310009 system_pods.go:89] "etcd-old-k8s-version-339286" [3703076a-03e5-4648-b6ca-6061ec5c7596] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:16:56.676988  310009 system_pods.go:89] "kindnet-6d922" [f25a3fb9-ffeb-44b3-b462-966272e7b376] Running
	I1108 09:16:56.676998  310009 system_pods.go:89] "kube-apiserver-old-k8s-version-339286" [5f0d90c2-6b0e-4cc3-8b20-b20a49f26506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:16:56.677010  310009 system_pods.go:89] "kube-controller-manager-old-k8s-version-339286" [86b8a1d9-6066-45a5-9ca2-df85c6ccce00] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:16:56.677016  310009 system_pods.go:89] "kube-proxy-v4l6x" [c75d7f1b-4515-4c79-a0c2-87f23912d198] Running
	I1108 09:16:56.677023  310009 system_pods.go:89] "kube-scheduler-old-k8s-version-339286" [6538a0e7-2d3f-45d2-8c11-098f2a8b9834] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:16:56.677029  310009 system_pods.go:89] "storage-provisioner" [47335341-42b0-4e22-9609-1d629e34fc56] Running
	I1108 09:16:56.677039  310009 system_pods.go:126] duration metric: took 4.951844ms to wait for k8s-apps to be running ...
	I1108 09:16:56.677052  310009 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:16:56.677109  310009 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:16:56.694065  310009 system_svc.go:56] duration metric: took 17.006559ms WaitForService to wait for kubelet
	I1108 09:16:56.694102  310009 kubeadm.go:587] duration metric: took 3.71728037s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:16:56.694124  310009 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:16:56.698669  310009 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:16:56.698698  310009 node_conditions.go:123] node cpu capacity is 8
	I1108 09:16:56.698712  310009 node_conditions.go:105] duration metric: took 4.583573ms to run NodePressure ...
	I1108 09:16:56.698726  310009 start.go:242] waiting for startup goroutines ...
	I1108 09:16:56.698734  310009 start.go:247] waiting for cluster config update ...
	I1108 09:16:56.698748  310009 start.go:256] writing updated cluster config ...
	I1108 09:16:56.699106  310009 ssh_runner.go:195] Run: rm -f paused
	I1108 09:16:56.704275  310009 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:16:56.709216  310009 pod_ready.go:83] waiting for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:16:58.714429  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:00.715703  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
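	
	The wait loop above polls the coredns pod until it reports Ready or disappears. An equivalent manual check, assuming minikube created a kubectl context named after the profile (its default behavior):
	
	  # Mirror minikube's extra 4m0s pod_ready wait for the coredns pod
	  kubectl --context old-k8s-version-339286 -n kube-system \
	    wait --for=condition=Ready pod/coredns-5dd5756b68-88pvx --timeout=4m
	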
	I1108 09:16:56.350377  312299 out.go:252] * Restarting existing docker container for "embed-certs-271910" ...
	I1108 09:16:56.350457  312299 cli_runner.go:164] Run: docker start embed-certs-271910
	I1108 09:16:56.659571  312299 cli_runner.go:164] Run: docker container inspect embed-certs-271910 --format={{.State.Status}}
	I1108 09:16:56.683050  312299 kic.go:430] container "embed-certs-271910" state is running.
	I1108 09:16:56.683482  312299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-271910
	I1108 09:16:56.710777  312299 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/embed-certs-271910/config.json ...
	I1108 09:16:56.711044  312299 machine.go:94] provisionDockerMachine start ...
	I1108 09:16:56.711145  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:56.734853  312299 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:56.735083  312299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1108 09:16:56.735098  312299 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:16:56.735819  312299 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:40776->127.0.0.1:33114: read: connection reset by peer
	I1108 09:16:59.863392  312299 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-271910
	
	I1108 09:16:59.863427  312299 ubuntu.go:182] provisioning hostname "embed-certs-271910"
	I1108 09:16:59.863482  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:16:59.882656  312299 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:59.882930  312299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1108 09:16:59.882950  312299 main.go:143] libmachine: About to run SSH command:
	sudo hostname embed-certs-271910 && echo "embed-certs-271910" | sudo tee /etc/hostname
	I1108 09:17:00.021490  312299 main.go:143] libmachine: SSH cmd err, output: <nil>: embed-certs-271910
	
	I1108 09:17:00.021569  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:00.041092  312299 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:00.041331  312299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1108 09:17:00.041367  312299 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-271910' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-271910/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-271910' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:17:00.169775  312299 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:17:00.169809  312299 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:17:00.169853  312299 ubuntu.go:190] setting up certificates
	I1108 09:17:00.169870  312299 provision.go:84] configureAuth start
	I1108 09:17:00.169931  312299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-271910
	I1108 09:17:00.188581  312299 provision.go:143] copyHostCerts
	I1108 09:17:00.188653  312299 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:17:00.188674  312299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:17:00.188762  312299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:17:00.188904  312299 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:17:00.188918  312299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:17:00.188962  312299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:17:00.189046  312299 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:17:00.189055  312299 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:17:00.189094  312299 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:17:00.189174  312299 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.embed-certs-271910 san=[127.0.0.1 192.168.85.2 embed-certs-271910 localhost minikube]
	I1108 09:17:00.797302  312299 provision.go:177] copyRemoteCerts
	I1108 09:17:00.797434  312299 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:17:00.797487  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:00.818779  312299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:17:00.913658  312299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:17:00.931761  312299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:17:00.948998  312299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:17:00.967695  312299 provision.go:87] duration metric: took 797.809921ms to configureAuth
	I1108 09:17:00.967724  312299 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:17:00.967924  312299 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:00.968054  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:00.989640  312299 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:00.989928  312299 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33114 <nil> <nil>}
	I1108 09:17:00.989949  312299 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:16:57.177769  313008 out.go:252] * Restarting existing docker container for "no-preload-220714" ...
	I1108 09:16:57.177833  313008 cli_runner.go:164] Run: docker start no-preload-220714
	I1108 09:16:57.456363  313008 cli_runner.go:164] Run: docker container inspect no-preload-220714 --format={{.State.Status}}
	I1108 09:16:57.475158  313008 kic.go:430] container "no-preload-220714" state is running.
	I1108 09:16:57.475571  313008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-220714
	I1108 09:16:57.494841  313008 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/no-preload-220714/config.json ...
	I1108 09:16:57.495091  313008 machine.go:94] provisionDockerMachine start ...
	I1108 09:16:57.495167  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:16:57.514363  313008 main.go:143] libmachine: Using SSH client type: native
	I1108 09:16:57.514594  313008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1108 09:16:57.514608  313008 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:16:57.515372  313008 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:48586->127.0.0.1:33119: read: connection reset by peer
	I1108 09:17:00.646537  313008 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-220714
	
	I1108 09:17:00.646569  313008 ubuntu.go:182] provisioning hostname "no-preload-220714"
	I1108 09:17:00.646620  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:00.665904  313008 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:00.666132  313008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1108 09:17:00.666146  313008 main.go:143] libmachine: About to run SSH command:
	sudo hostname no-preload-220714 && echo "no-preload-220714" | sudo tee /etc/hostname
	I1108 09:17:00.807606  313008 main.go:143] libmachine: SSH cmd err, output: <nil>: no-preload-220714
	
	I1108 09:17:00.807675  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:00.827648  313008 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:00.827856  313008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1108 09:17:00.827873  313008 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-220714' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-220714/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-220714' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:17:00.955843  313008 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:17:00.955883  313008 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:17:00.955923  313008 ubuntu.go:190] setting up certificates
	I1108 09:17:00.955938  313008 provision.go:84] configureAuth start
	I1108 09:17:00.956001  313008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-220714
	I1108 09:17:00.975718  313008 provision.go:143] copyHostCerts
	I1108 09:17:00.975777  313008 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:17:00.975794  313008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:17:00.975851  313008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:17:00.975963  313008 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:17:00.975974  313008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:17:00.976002  313008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:17:00.976096  313008 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:17:00.976108  313008 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:17:00.976138  313008 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:17:00.976208  313008 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.no-preload-220714 san=[127.0.0.1 192.168.94.2 localhost minikube no-preload-220714]
	I1108 09:17:01.176505  313008 provision.go:177] copyRemoteCerts
	I1108 09:17:01.176561  313008 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:17:01.176593  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:01.198479  313008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:17:01.297839  313008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:17:01.317355  313008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:17:01.336357  313008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:17:01.354873  313008 provision.go:87] duration metric: took 398.92042ms to configureAuth
	I1108 09:17:01.354902  313008 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:17:01.355091  313008 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:01.355210  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:01.374882  313008 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:01.375086  313008 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33119 <nil> <nil>}
	I1108 09:17:01.375102  313008 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:17:01.668598  313008 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:17:01.668625  313008 machine.go:97] duration metric: took 4.173515808s to provisionDockerMachine
	I1108 09:17:01.668640  313008 start.go:293] postStartSetup for "no-preload-220714" (driver="docker")
	I1108 09:17:01.668654  313008 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:17:01.668736  313008 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:17:01.668785  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:01.689396  313008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:17:01.785186  313008 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:17:01.789045  313008 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:17:01.789079  313008 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:17:01.789093  313008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:17:01.789161  313008 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:17:01.789274  313008 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:17:01.789472  313008 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:17:01.797651  313008 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:01.817293  313008 start.go:296] duration metric: took 148.627339ms for postStartSetup
	I1108 09:17:01.817373  313008 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:17:01.817406  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:01.837803  313008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:17:01.929097  313008 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:17:01.934048  313008 fix.go:56] duration metric: took 4.778596843s for fixHost
	I1108 09:17:01.934077  313008 start.go:83] releasing machines lock for "no-preload-220714", held for 4.778644038s
	I1108 09:17:01.934155  313008 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" no-preload-220714
	I1108 09:17:01.286387  312299 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:17:01.286415  312299 machine.go:97] duration metric: took 4.575351808s to provisionDockerMachine
	I1108 09:17:01.286426  312299 start.go:293] postStartSetup for "embed-certs-271910" (driver="docker")
	I1108 09:17:01.286438  312299 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:17:01.286491  312299 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:17:01.286522  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:01.307493  312299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:17:01.403050  312299 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:17:01.406949  312299 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:17:01.406978  312299 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:17:01.406991  312299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:17:01.407060  312299 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:17:01.407168  312299 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:17:01.407323  312299 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:17:01.416002  312299 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:01.434767  312299 start.go:296] duration metric: took 148.324986ms for postStartSetup
	I1108 09:17:01.434846  312299 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:17:01.434895  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:01.456193  312299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:17:01.549277  312299 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:17:01.554655  312299 fix.go:56] duration metric: took 5.228686425s for fixHost
	I1108 09:17:01.554698  312299 start.go:83] releasing machines lock for "embed-certs-271910", held for 5.228753321s
	I1108 09:17:01.554755  312299 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" embed-certs-271910
	I1108 09:17:01.574818  312299 ssh_runner.go:195] Run: cat /version.json
	I1108 09:17:01.574869  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:01.574898  312299 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:17:01.574959  312299 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:01.596010  312299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:17:01.596377  312299 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:17:01.690136  312299 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:01.746226  312299 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:17:01.783636  312299 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:17:01.788933  312299 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:17:01.789007  312299 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:17:01.797443  312299 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:17:01.797470  312299 start.go:496] detecting cgroup driver to use...
	I1108 09:17:01.797505  312299 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:17:01.797551  312299 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:17:01.813711  312299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:17:01.827669  312299 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:17:01.827727  312299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:17:01.843546  312299 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:17:01.856758  312299 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:17:01.942479  312299 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:17:02.045750  312299 docker.go:234] disabling docker service ...
	I1108 09:17:02.045829  312299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:17:02.062639  312299 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:17:02.077750  312299 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:17:02.178687  312299 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:17:02.259519  312299 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:17:02.272074  312299 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:17:02.287200  312299 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:17:02.287258  312299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.296980  312299 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:17:02.297044  312299 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.308183  312299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.318785  312299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.329569  312299 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:17:02.337528  312299 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.347041  312299 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.356231  312299 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.366531  312299 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:17:02.375724  312299 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:17:02.383513  312299 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:02.472447  312299 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:17:02.608591  312299 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:17:02.608709  312299 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:17:02.613725  312299 start.go:564] Will wait 60s for crictl version
	I1108 09:17:02.613778  312299 ssh_runner.go:195] Run: which crictl
	I1108 09:17:02.617599  312299 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:17:02.650517  312299 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:17:02.650646  312299 ssh_runner.go:195] Run: crio --version
	I1108 09:17:02.679796  312299 ssh_runner.go:195] Run: crio --version
	I1108 09:17:02.710973  312299 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:17:01.953633  313008 ssh_runner.go:195] Run: cat /version.json
	I1108 09:17:01.953646  313008 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:17:01.953689  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:01.953721  313008 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:01.983807  313008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:17:01.985075  313008 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:17:02.153263  313008 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:02.161076  313008 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:17:02.200077  313008 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:17:02.209026  313008 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:17:02.209101  313008 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:17:02.219897  313008 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:17:02.219924  313008 start.go:496] detecting cgroup driver to use...
	I1108 09:17:02.219953  313008 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:17:02.220006  313008 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:17:02.235432  313008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:17:02.247735  313008 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:17:02.247797  313008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:17:02.263028  313008 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:17:02.276533  313008 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:17:02.367468  313008 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:17:02.470569  313008 docker.go:234] disabling docker service ...
	I1108 09:17:02.470643  313008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:17:02.489570  313008 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:17:02.505022  313008 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:17:02.610682  313008 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:17:02.705650  313008 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:17:02.720197  313008 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:17:02.736949  313008 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:17:02.737005  313008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.746309  313008 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:17:02.746373  313008 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.756787  313008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.766675  313008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.777860  313008 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:17:02.788494  313008 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.798031  313008 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.808748  313008 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:02.819804  313008 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:17:02.828089  313008 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:17:02.836268  313008 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:02.931532  313008 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:17:03.070374  313008 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:17:03.070443  313008 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:17:03.074663  313008 start.go:564] Will wait 60s for crictl version
	I1108 09:17:03.074722  313008 ssh_runner.go:195] Run: which crictl
	I1108 09:17:03.078499  313008 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:17:03.105451  313008 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:17:03.105535  313008 ssh_runner.go:195] Run: crio --version
	I1108 09:17:03.139007  313008 ssh_runner.go:195] Run: crio --version
	I1108 09:17:03.187783  313008 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
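	
	Both runners apply the same sed sequence to /etc/crio/crio.conf.d/02-crio.conf before restarting CRI-O. Read together, those edits should leave the drop-in looking roughly like the reconstruction below (a sketch assembled from the commands, not captured from either node):
	
	  # Inspect the drop-in that the sed commands above produced
	  sudo cat /etc/crio/crio.conf.d/02-crio.conf
	  # Expected (reconstructed) contents:
	  #   pause_image = "registry.k8s.io/pause:3.10.1"
	  #   cgroup_manager = "systemd"
	  #   conmon_cgroup = "pod"
	  #   default_sysctls = [
	  #     "net.ipv4.ip_unprivileged_port_start=0",
	  #   ]
	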
	
	
	==> CRI-O <==
	Nov 08 09:16:51 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:51.653213142Z" level=info msg="Starting container: 3a71a95edf9b7cb20174aac9893437c9a34cf83487f5588f23efc676bdef8c37" id=36a6268b-75a2-4a4e-973f-4f16adcca360 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:51 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:51.655126967Z" level=info msg="Started container" PID=1845 containerID=3a71a95edf9b7cb20174aac9893437c9a34cf83487f5588f23efc676bdef8c37 description=kube-system/coredns-66bc5c9577-x49dj/coredns id=36a6268b-75a2-4a4e-973f-4f16adcca360 name=/runtime.v1.RuntimeService/StartContainer sandboxID=9e5748c2567f7c577ddff82539b17db738b9647cfe6d09cb681952104daa59b1
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.733711501Z" level=info msg="Running pod sandbox: default/busybox/POD" id=fc3c18ab-fe56-40a4-93ac-9cf58035027e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.733824728Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.738504014Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:155ba4c60e3244cdabcd260d047f8cdf638fb02e4c3fe6048c6c4f0fe212ce41 UID:24063ace-e00f-4f59-99d7-9d633314fdbc NetNS:/var/run/netns/9a071133-f84a-4840-a7b9-802203a1805f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005aa548}] Aliases:map[]}"
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.73853249Z" level=info msg="Adding pod default_busybox to CNI network \"kindnet\" (type=ptp)"
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.748176169Z" level=info msg="Got pod network &{Name:busybox Namespace:default ID:155ba4c60e3244cdabcd260d047f8cdf638fb02e4c3fe6048c6c4f0fe212ce41 UID:24063ace-e00f-4f59-99d7-9d633314fdbc NetNS:/var/run/netns/9a071133-f84a-4840-a7b9-802203a1805f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[] CgroupPath: PodAnnotations:0xc0005aa548}] Aliases:map[]}"
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.748312005Z" level=info msg="Checking pod default_busybox for CNI network kindnet (type=ptp)"
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.749015909Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.749831646Z" level=info msg="Ran pod sandbox 155ba4c60e3244cdabcd260d047f8cdf638fb02e4c3fe6048c6c4f0fe212ce41 with infra container: default/busybox/POD" id=fc3c18ab-fe56-40a4-93ac-9cf58035027e name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.751248896Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=dfa1321b-7f4d-4b5a-a582-8c7e2ef7dc20 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.751390627Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=dfa1321b-7f4d-4b5a-a582-8c7e2ef7dc20 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.751453222Z" level=info msg="Neither image nor artfiact gcr.io/k8s-minikube/busybox:1.28.4-glibc found" id=dfa1321b-7f4d-4b5a-a582-8c7e2ef7dc20 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.752132712Z" level=info msg="Pulling image: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=556f68e4-24fb-4b34-9176-4ceff4e7aacf name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:54 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:54.755016723Z" level=info msg="Trying to access \"gcr.io/k8s-minikube/busybox:1.28.4-glibc\""
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.25104213Z" level=info msg="Pulled image: gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998" id=556f68e4-24fb-4b34-9176-4ceff4e7aacf name=/runtime.v1.ImageService/PullImage
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.252338243Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=44d53998-d062-4ab3-b7c6-bc42d796e4fd name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.253867512Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=5bbbbb04-4e4a-4cbd-8937-8c77358afe4f name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.257123943Z" level=info msg="Creating container: default/busybox/busybox" id=42a09871-c562-4c0c-857f-7701099c6be6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.257253736Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.261985733Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.265672649Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.299380196Z" level=info msg="Created container 2a42533a7b1863c6386a91259b435ccd30adce4ac7868d2d1488aa8e40b8dab3: default/busybox/busybox" id=42a09871-c562-4c0c-857f-7701099c6be6 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.300015228Z" level=info msg="Starting container: 2a42533a7b1863c6386a91259b435ccd30adce4ac7868d2d1488aa8e40b8dab3" id=ed7f8421-8a71-4e7e-b7c6-a509be33824b name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:16:56 default-k8s-diff-port-677902 crio[779]: time="2025-11-08T09:16:56.301932603Z" level=info msg="Started container" PID=1921 containerID=2a42533a7b1863c6386a91259b435ccd30adce4ac7868d2d1488aa8e40b8dab3 description=default/busybox/busybox id=ed7f8421-8a71-4e7e-b7c6-a509be33824b name=/runtime.v1.RuntimeService/StartContainer sandboxID=155ba4c60e3244cdabcd260d047f8cdf638fb02e4c3fe6048c6c4f0fe212ce41
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                                    NAMESPACE
	2a42533a7b186       gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998   7 seconds ago       Running             busybox                   0                   155ba4c60e324       busybox                                                default
	3a71a95edf9b7       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                      12 seconds ago      Running             coredns                   0                   9e5748c2567f7       coredns-66bc5c9577-x49dj                               kube-system
	6bea5985caa31       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 seconds ago      Running             storage-provisioner       0                   5802f11b28faf       storage-provisioner                                    kube-system
	cd568bba1e6e2       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                      23 seconds ago      Running             kube-proxy                0                   2e5868a52e85d       kube-proxy-5d9f2                                       kube-system
	d959ec03cda92       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                      23 seconds ago      Running             kindnet-cni               0                   06e3348b3a69f       kindnet-x89ph                                          kube-system
	534473281a551       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                      33 seconds ago      Running             kube-apiserver            0                   d87fe54cfd4ee       kube-apiserver-default-k8s-diff-port-677902            kube-system
	b85861971c340       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                      33 seconds ago      Running             kube-controller-manager   0                   d49cf10ab00c0       kube-controller-manager-default-k8s-diff-port-677902   kube-system
	6bcab0b3e9184       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                      33 seconds ago      Running             kube-scheduler            0                   15f713d2060c6       kube-scheduler-default-k8s-diff-port-677902            kube-system
	ccae7ae5173bb       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                      33 seconds ago      Running             etcd                      0                   407ebe7381e2e       etcd-default-k8s-diff-port-677902                      kube-system
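	
	The table above is a CRI-level container listing for the default-k8s-diff-port-677902 node. To regenerate something equivalent directly (profile name taken from the pod names above):
	
	  # List all containers on the node via the CRI socket
	  minikube -p default-k8s-diff-port-677902 ssh -- sudo crictl ps -a
	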
	
	
	==> coredns [3a71a95edf9b7cb20174aac9893437c9a34cf83487f5588f23efc676bdef8c37] <==
	maxprocs: Leaving GOMAXPROCS=8: CPU quota undefined
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:38703 - 33187 "HINFO IN 7944539064892061102.9032368206402769577. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.047995576s
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-677902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-677902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=default-k8s-diff-port-677902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-677902
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:16:54 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:16:51 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:16:51 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:16:51 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:16:51 +0000   Sat, 08 Nov 2025 09:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-677902
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9a73a23a-0cc4-4911-a4ee-3b28faba34c9
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10s
	  kube-system                 coredns-66bc5c9577-x49dj                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     24s
	  kube-system                 etcd-default-k8s-diff-port-677902                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         30s
	  kube-system                 kindnet-x89ph                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      24s
	  kube-system                 kube-apiserver-default-k8s-diff-port-677902             250m (3%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-677902    200m (2%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-proxy-5d9f2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	  kube-system                 kube-scheduler-default-k8s-diff-port-677902             100m (1%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         24s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23s                kube-proxy       
	  Normal  Starting                 35s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  34s (x8 over 35s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    34s (x8 over 35s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     34s (x8 over 35s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s                kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s                kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s                kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           25s                node-controller  Node default-k8s-diff-port-677902 event: Registered Node default-k8s-diff-port-677902 in Controller
	  Normal  NodeReady                13s                kubelet          Node default-k8s-diff-port-677902 status is now: NodeReady
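
Note: the node status above is standard "kubectl describe node" output captured by the post-mortem. To reproduce it against this profile (context name taken from the helper commands further down), something like:

  $ kubectl --context default-k8s-diff-port-677902 describe node default-k8s-diff-port-677902

The two "Starting kubelet" event groups (35s and 30s ago) are consistent with the kubelet being restarted once during cluster bring-up rather than with any fault.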
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
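
Note: the "martian source" messages are the kernel flagging packets whose source address looks wrong for eth0; with the bridged pod network used here they are routinely triggered by pod ARP traffic and are noise rather than a failure. If needed, martian logging can be silenced with a generic kernel sysctl (not something this test toggles):

  $ sudo sysctl -w net.ipv4.conf.all.log_martians=0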
	
	
	==> etcd [ccae7ae5173bb8298887a586fade9c4a107e665b8148cbd1bab40d3daca31ca9] <==
	{"level":"warn","ts":"2025-11-08T09:16:31.480038Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60826","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.488350Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.494679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60860","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.501904Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60878","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.508202Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60906","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.515372Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60936","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.521997Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.528612Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60966","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.535679Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60978","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.548418Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60994","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.554794Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:60998","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.561079Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.567420Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32820","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.578057Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.587529Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32844","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.594557Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32864","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.601723Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.608435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.615114Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32918","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.621919Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.628768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32954","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.646151Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:32986","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.652263Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33006","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.658344Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33024","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:16:31.702768Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:33046","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:17:04 up 59 min,  0 user,  load average: 4.86, 3.92, 2.50
	Linux default-k8s-diff-port-677902 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [d959ec03cda921b0edbcac79b3915b996883eceb70e26e21dd9310c6107d2455] <==
	I1108 09:16:40.793776       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:40.794042       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:16:40.794227       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:40.794252       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:40.794307       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:40Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:40.994857       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:40.994978       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:40.995001       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:40.995201       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:41.395367       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:41.395402       1 metrics.go:72] Registering metrics
	I1108 09:16:41.395462       1 controller.go:711] "Syncing nftables rules"
	I1108 09:16:50.995553       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:16:50.995633       1 main.go:301] handling current node
	I1108 09:17:00.997384       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:17:00.997426       1 main.go:301] handling current node
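
Note: the "nri plugin exited" line is expected when the container runtime does not expose the NRI socket; kindnet keeps running without it, as the subsequent "Caches are synced" line shows. On the node, whether CRI-O has NRI enabled can be checked as below (the socket path and config section are CRI-O defaults, so treat them as assumptions):

  $ ls -l /var/run/nri/nri.sock
  $ sudo crio config | grep -A2 '\[crio.nri\]'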
	
	
	==> kube-apiserver [534473281a55174e613f298e4f6ae573b552f8d37d84fb7c07ed952f31857694] <==
	I1108 09:16:32.202613       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:16:32.206681       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:32.206822       1 default_servicecidr_controller.go:228] Setting default ServiceCIDR condition Ready to True
	I1108 09:16:32.211550       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:32.211902       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:16:32.219355       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:16:32.223149       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:16:33.105491       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:16:33.109622       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:16:33.109639       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:16:33.592698       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:16:33.630409       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:16:33.711602       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:16:33.717848       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.76.2]
	I1108 09:16:33.718739       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:16:33.727094       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:16:34.128646       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:16:34.518863       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:16:34.528356       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:16:34.535876       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:16:39.883242       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:39.887014       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:16:40.137583       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:16:40.231376       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	E1108 09:17:02.519299       1 conn.go:339] Error on socket receive: read tcp 192.168.76.2:8444->192.168.76.1:35746: use of closed network connection
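
Note: the "quota admission added evaluator" lines are normal startup chatter (the quota plugin registers each resource type the first time it is used), and the closing "use of closed network connection" on port 8444 is a client hanging up mid-request, not an apiserver fault. A quick readiness check against this cluster:

  $ kubectl --context default-k8s-diff-port-677902 get --raw '/readyz?verbose'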
	
	
	==> kube-controller-manager [b85861971c340739d48e4293204594b44a116078f31d5c8a367cc170169f93f2] <==
	I1108 09:16:39.127159       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:16:39.127228       1 shared_informer.go:356] "Caches are synced" controller="PVC protection"
	I1108 09:16:39.127349       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:16:39.127932       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-677902"
	I1108 09:16:39.128066       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:16:39.128083       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:16:39.128796       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:16:39.128145       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:16:39.128989       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:16:39.128157       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:16:39.128171       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:16:39.130381       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:16:39.130554       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:16:39.130749       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:16:39.131358       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:16:39.132103       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="default-k8s-diff-port-677902" podCIDRs=["10.244.0.0/24"]
	I1108 09:16:39.132494       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:16:39.136486       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:16:39.148178       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:16:39.173968       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:16:39.176884       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:16:39.176990       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:16:39.177005       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	E1108 09:16:40.381205       1 replica_set.go:587] "Unhandled Error" err="sync \"kube-system/coredns-66bc5c9577\" failed with Operation cannot be fulfilled on replicasets.apps \"coredns-66bc5c9577\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I1108 09:16:54.132698       1 node_lifecycle_controller.go:1044] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
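
Note: the single "object has been modified" error on the coredns ReplicaSet is an optimistic-concurrency conflict: two controllers raced to update the same object, one write lost on resourceVersion, and the loser simply re-synced. The resourceVersion that gates such writes can be read directly:

  $ kubectl --context default-k8s-diff-port-677902 -n kube-system get rs coredns-66bc5c9577 -o jsonpath='{.metadata.resourceVersion}{"\n"}'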
	
	
	==> kube-proxy [cd568bba1e6e211049657689f5ae3212f4f8d5f544560af8bca465ba9cde7f49] <==
	I1108 09:16:40.680533       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:16:40.751942       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:16:40.852760       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:16:40.852796       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:16:40.852869       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:16:40.873502       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:40.873625       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:16:40.879409       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:16:40.879889       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:16:40.879966       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:40.882789       1 config.go:309] "Starting node config controller"
	I1108 09:16:40.882858       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:16:40.882871       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:16:40.882802       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:16:40.882881       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:16:40.882920       1 config.go:200] "Starting service config controller"
	I1108 09:16:40.882931       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:16:40.882955       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:16:40.882963       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:16:40.983363       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:16:40.983379       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:16:40.983407       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
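
Note: the "nodePortAddresses is unset" message is a configuration hint, not an error; kube-proxy falls back to accepting NodePort traffic on every local IP. On a kubeadm-style cluster like this one the setting lives in the kube-proxy ConfigMap, so it can be inspected with:

  $ kubectl --context default-k8s-diff-port-677902 -n kube-system get cm kube-proxy -o yaml | grep -n nodePortAddresses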
	
	
	==> kube-scheduler [6bcab0b3e91841842a4e26e9063f120d6a7bc2b07b89f9948e8eb8baefad7d7d] <==
	E1108 09:16:32.162754       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:16:32.163600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:16:32.163600       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:16:32.163619       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:16:32.163618       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:16:32.163669       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:32.163677       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:32.163718       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:16:32.163824       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:16:32.163899       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:16:32.984543       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	E1108 09:16:33.005730       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:16:33.022967       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:16:33.034964       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:16:33.044038       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:16:33.079790       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:16:33.099043       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:16:33.151634       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:16:33.198405       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:16:33.275882       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:16:33.304103       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:16:33.323154       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:16:33.358462       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:16:33.375688       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	I1108 09:16:35.758564       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
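
Note: the burst of "Failed to watch ... is forbidden" errors is the usual scheduler startup race: its informers begin listing before the RBAC bootstrap roles have been reconciled, and the final "Caches are synced" line shows it recovered. Whether the permissions are now in place can be verified with impersonation:

  $ kubectl --context default-k8s-diff-port-677902 auth can-i list nodes --as=system:kube-scheduler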
	
	
	==> kubelet <==
	Nov 08 09:16:35 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:35.404648    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-default-k8s-diff-port-677902" podStartSLOduration=1.404627602 podStartE2EDuration="1.404627602s" podCreationTimestamp="2025-11-08 09:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:35.40458943 +0000 UTC m=+1.124555405" watchObservedRunningTime="2025-11-08 09:16:35.404627602 +0000 UTC m=+1.124593574"
	Nov 08 09:16:35 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:35.423948    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-default-k8s-diff-port-677902" podStartSLOduration=1.423928901 podStartE2EDuration="1.423928901s" podCreationTimestamp="2025-11-08 09:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:35.414798678 +0000 UTC m=+1.134764650" watchObservedRunningTime="2025-11-08 09:16:35.423928901 +0000 UTC m=+1.143894877"
	Nov 08 09:16:35 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:35.432450    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-default-k8s-diff-port-677902" podStartSLOduration=1.4324272009999999 podStartE2EDuration="1.432427201s" podCreationTimestamp="2025-11-08 09:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:35.424025763 +0000 UTC m=+1.143991736" watchObservedRunningTime="2025-11-08 09:16:35.432427201 +0000 UTC m=+1.152393166"
	Nov 08 09:16:35 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:35.432675    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-default-k8s-diff-port-677902" podStartSLOduration=1.432663099 podStartE2EDuration="1.432663099s" podCreationTimestamp="2025-11-08 09:16:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:35.432621269 +0000 UTC m=+1.152587229" watchObservedRunningTime="2025-11-08 09:16:35.432663099 +0000 UTC m=+1.152629072"
	Nov 08 09:16:39 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:39.194828    1322 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Nov 08 09:16:39 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:39.195621    1322 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292440    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5f49623a-57d7-4854-8c1b-b4ca027bd24c-lib-modules\") pod \"kindnet-x89ph\" (UID: \"5f49623a-57d7-4854-8c1b-b4ca027bd24c\") " pod="kube-system/kindnet-x89ph"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292498    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e880f62e-f713-4254-98e7-84f3941024f0-kube-proxy\") pod \"kube-proxy-5d9f2\" (UID: \"e880f62e-f713-4254-98e7-84f3941024f0\") " pod="kube-system/kube-proxy-5d9f2"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292522    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e880f62e-f713-4254-98e7-84f3941024f0-lib-modules\") pod \"kube-proxy-5d9f2\" (UID: \"e880f62e-f713-4254-98e7-84f3941024f0\") " pod="kube-system/kube-proxy-5d9f2"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292552    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9ds\" (UniqueName: \"kubernetes.io/projected/e880f62e-f713-4254-98e7-84f3941024f0-kube-api-access-6w9ds\") pod \"kube-proxy-5d9f2\" (UID: \"e880f62e-f713-4254-98e7-84f3941024f0\") " pod="kube-system/kube-proxy-5d9f2"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292590    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5f49623a-57d7-4854-8c1b-b4ca027bd24c-xtables-lock\") pod \"kindnet-x89ph\" (UID: \"5f49623a-57d7-4854-8c1b-b4ca027bd24c\") " pod="kube-system/kindnet-x89ph"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292647    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/5f49623a-57d7-4854-8c1b-b4ca027bd24c-cni-cfg\") pod \"kindnet-x89ph\" (UID: \"5f49623a-57d7-4854-8c1b-b4ca027bd24c\") " pod="kube-system/kindnet-x89ph"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292669    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9lqv\" (UniqueName: \"kubernetes.io/projected/5f49623a-57d7-4854-8c1b-b4ca027bd24c-kube-api-access-r9lqv\") pod \"kindnet-x89ph\" (UID: \"5f49623a-57d7-4854-8c1b-b4ca027bd24c\") " pod="kube-system/kindnet-x89ph"
	Nov 08 09:16:40 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:40.292692    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e880f62e-f713-4254-98e7-84f3941024f0-xtables-lock\") pod \"kube-proxy-5d9f2\" (UID: \"e880f62e-f713-4254-98e7-84f3941024f0\") " pod="kube-system/kube-proxy-5d9f2"
	Nov 08 09:16:41 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:41.406479    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-x89ph" podStartSLOduration=1.406456426 podStartE2EDuration="1.406456426s" podCreationTimestamp="2025-11-08 09:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:41.406430006 +0000 UTC m=+7.126395981" watchObservedRunningTime="2025-11-08 09:16:41.406456426 +0000 UTC m=+7.126422399"
	Nov 08 09:16:41 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:41.416305    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5d9f2" podStartSLOduration=1.416268778 podStartE2EDuration="1.416268778s" podCreationTimestamp="2025-11-08 09:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:41.416250394 +0000 UTC m=+7.136216367" watchObservedRunningTime="2025-11-08 09:16:41.416268778 +0000 UTC m=+7.136234750"
	Nov 08 09:16:51 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:51.262017    1322 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
	Nov 08 09:16:51 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:51.374818    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6r6v\" (UniqueName: \"kubernetes.io/projected/00375859-41ff-4f26-b07f-73a5d30e46ee-kube-api-access-d6r6v\") pod \"storage-provisioner\" (UID: \"00375859-41ff-4f26-b07f-73a5d30e46ee\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:51 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:51.374954    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjs6j\" (UniqueName: \"kubernetes.io/projected/ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d-kube-api-access-mjs6j\") pod \"coredns-66bc5c9577-x49dj\" (UID: \"ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d\") " pod="kube-system/coredns-66bc5c9577-x49dj"
	Nov 08 09:16:51 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:51.375045    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/00375859-41ff-4f26-b07f-73a5d30e46ee-tmp\") pod \"storage-provisioner\" (UID: \"00375859-41ff-4f26-b07f-73a5d30e46ee\") " pod="kube-system/storage-provisioner"
	Nov 08 09:16:51 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:51.375091    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d-config-volume\") pod \"coredns-66bc5c9577-x49dj\" (UID: \"ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d\") " pod="kube-system/coredns-66bc5c9577-x49dj"
	Nov 08 09:16:52 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:52.435033    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-x49dj" podStartSLOduration=12.435013332 podStartE2EDuration="12.435013332s" podCreationTimestamp="2025-11-08 09:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:52.434846276 +0000 UTC m=+18.154812248" watchObservedRunningTime="2025-11-08 09:16:52.435013332 +0000 UTC m=+18.154979306"
	Nov 08 09:16:52 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:52.455955    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=12.4559324 podStartE2EDuration="12.4559324s" podCreationTimestamp="2025-11-08 09:16:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:16:52.445592191 +0000 UTC m=+18.165558164" watchObservedRunningTime="2025-11-08 09:16:52.4559324 +0000 UTC m=+18.175898373"
	Nov 08 09:16:54 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:54.494326    1322 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgr5g\" (UniqueName: \"kubernetes.io/projected/24063ace-e00f-4f59-99d7-9d633314fdbc-kube-api-access-mgr5g\") pod \"busybox\" (UID: \"24063ace-e00f-4f59-99d7-9d633314fdbc\") " pod="default/busybox"
	Nov 08 09:16:56 default-k8s-diff-port-677902 kubelet[1322]: I1108 09:16:56.444841    1322 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/busybox" podStartSLOduration=0.943366085 podStartE2EDuration="2.44482293s" podCreationTimestamp="2025-11-08 09:16:54 +0000 UTC" firstStartedPulling="2025-11-08 09:16:54.751743947 +0000 UTC m=+20.471709900" lastFinishedPulling="2025-11-08 09:16:56.253200777 +0000 UTC m=+21.973166745" observedRunningTime="2025-11-08 09:16:56.44442157 +0000 UTC m=+22.164387543" watchObservedRunningTime="2025-11-08 09:16:56.44482293 +0000 UTC m=+22.164788900"
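
Note: the "Observed pod startup duration" entries come from the kubelet's pod startup latency tracker; a firstStartedPulling/lastFinishedPulling of 0001-01-01 means no pull happened, suggesting those images were already cached on the node, while busybox shows a real ~1.5s pull. The node's image cache can be listed with:

  $ out/minikube-linux-amd64 -p default-k8s-diff-port-677902 ssh -- sudo crictl images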
	
	
	==> storage-provisioner [6bea5985caa3163693313681e4ed7cbce394961204f813b4ae3d608240e6de7f] <==
	I1108 09:16:51.664993       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:16:51.682463       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:16:51.682532       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:16:51.685274       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:51.690111       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:16:51.690494       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:16:51.690682       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-677902_c203b18b-177f-43fe-8fc1-c508bfcc03eb!
	I1108 09:16:51.690685       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7a8f9c03-6b30-4ca5-a9cb-a97fbf27f9a3", APIVersion:"v1", ResourceVersion:"441", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-677902_c203b18b-177f-43fe-8fc1-c508bfcc03eb became leader
	W1108 09:16:51.692633       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:51.699276       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:16:51.791299       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-677902_c203b18b-177f-43fe-8fc1-c508bfcc03eb!
	W1108 09:16:53.702759       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:53.707271       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:55.711434       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:55.717522       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:57.722071       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:57.728159       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:59.731880       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:16:59.735974       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:01.738992       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:01.744220       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:03.748919       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:03.757805       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
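
Note: the recurring "v1 Endpoints is deprecated" warnings come from the provisioner's leader election, which still stores its lock in a v1 Endpoints object rather than a coordination.k8s.io Lease; noisy but harmless. The lock object named in the log can be inspected directly:

  $ kubectl --context default-k8s-diff-port-677902 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml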
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (2.90s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Pause (6.27s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-339286 --alsologtostderr -v=1
E1108 09:17:49.102390    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.108776    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.120142    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.141561    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.183136    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.265180    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.426572    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:49.748555    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-339286 --alsologtostderr -v=1: exit status 80 (2.368455262s)
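
Exit status 80 appears to be minikube's guest-error class in its exit-code scheme. The stderr below shows the actual failure mode: pause first disables the kubelet, then lists containers via crictl (successfully), but the follow-up "sudo runc list -f json" fails with "open /run/runc: no such file or directory" and keeps being retried. On a CRI-O node the bare runc state root may simply not exist at /run/runc, which would explain containers being visible to crictl but not to runc. A hedged way to compare the two views on the node (the --root path mirrors what the log queries):

  $ out/minikube-linux-amd64 -p old-k8s-version-339286 ssh -- sudo crictl ps -q
  $ out/minikube-linux-amd64 -p old-k8s-version-339286 ssh -- sudo runc --root /run/runc list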

                                                
                                                
-- stdout --
	* Pausing node old-k8s-version-339286 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:17:48.986231  322035 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:48.986543  322035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:48.986555  322035 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:48.986562  322035 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:48.986792  322035 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:48.987050  322035 out.go:368] Setting JSON to false
	I1108 09:17:48.987118  322035 mustload.go:66] Loading cluster: old-k8s-version-339286
	I1108 09:17:48.987495  322035 config.go:182] Loaded profile config "old-k8s-version-339286": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.0
	I1108 09:17:48.987896  322035 cli_runner.go:164] Run: docker container inspect old-k8s-version-339286 --format={{.State.Status}}
	I1108 09:17:49.006729  322035 host.go:66] Checking if "old-k8s-version-339286" exists ...
	I1108 09:17:49.006983  322035 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:49.064598  322035 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:86 SystemTime:2025-11-08 09:17:49.053480057 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:49.065359  322035 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-339286 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:17:49.067124  322035 out.go:179] * Pausing node old-k8s-version-339286 ... 
	I1108 09:17:49.068199  322035 host.go:66] Checking if "old-k8s-version-339286" exists ...
	I1108 09:17:49.068518  322035 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:49.068560  322035 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-339286
	I1108 09:17:49.087887  322035 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33109 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/old-k8s-version-339286/id_rsa Username:docker}
	I1108 09:17:49.180893  322035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:49.204709  322035 pause.go:52] kubelet running: true
	I1108 09:17:49.204782  322035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:49.374183  322035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:49.374274  322035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:49.444481  322035 cri.go:89] found id: "a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5"
	I1108 09:17:49.444512  322035 cri.go:89] found id: "b6cde499f752ef145be3de31b57fb2d4179e3c94f0b0c1122da9b0663243c16c"
	I1108 09:17:49.444518  322035 cri.go:89] found id: "7328412edf383ebc9fbee37e5106e103265cecd11ef6e4b37aad9fc4ef5afa30"
	I1108 09:17:49.444524  322035 cri.go:89] found id: "40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125"
	I1108 09:17:49.444528  322035 cri.go:89] found id: "7fd46eea766854907abc014be16bd2d636925caf5dc40c846854d2596d5eb35b"
	I1108 09:17:49.444533  322035 cri.go:89] found id: "05f0737bca264f7f63b51b5b41958d7c656b10eb4e6383035b2181dc9b6cf531"
	I1108 09:17:49.444537  322035 cri.go:89] found id: "b110ac2f6aa3af2724fee2a70005a78d6d94180425eb8c585f94cc26ee06c01d"
	I1108 09:17:49.444541  322035 cri.go:89] found id: "98d55dc91e4cc5e33d70693f9526f6aa60b212a464cedaf28800663629becec9"
	I1108 09:17:49.444545  322035 cri.go:89] found id: "5f7fc9875b5fc7556f1ac83d8021344a544c674dc9f5c94000db6e9658a05653"
	I1108 09:17:49.444553  322035 cri.go:89] found id: "a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	I1108 09:17:49.444558  322035 cri.go:89] found id: "03cf5adcdb2bd89563eab50522293021aed573d100ffd0206d694d31bcf28fbd"
	I1108 09:17:49.444563  322035 cri.go:89] found id: ""
	I1108 09:17:49.444604  322035 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:49.456997  322035 retry.go:31] will retry after 297.977376ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:49Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:49.755565  322035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:49.768332  322035 pause.go:52] kubelet running: false
	I1108 09:17:49.768400  322035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:49.914551  322035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:49.914673  322035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:49.981300  322035 cri.go:89] found id: "a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5"
	I1108 09:17:49.981321  322035 cri.go:89] found id: "b6cde499f752ef145be3de31b57fb2d4179e3c94f0b0c1122da9b0663243c16c"
	I1108 09:17:49.981325  322035 cri.go:89] found id: "7328412edf383ebc9fbee37e5106e103265cecd11ef6e4b37aad9fc4ef5afa30"
	I1108 09:17:49.981329  322035 cri.go:89] found id: "40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125"
	I1108 09:17:49.981331  322035 cri.go:89] found id: "7fd46eea766854907abc014be16bd2d636925caf5dc40c846854d2596d5eb35b"
	I1108 09:17:49.981334  322035 cri.go:89] found id: "05f0737bca264f7f63b51b5b41958d7c656b10eb4e6383035b2181dc9b6cf531"
	I1108 09:17:49.981337  322035 cri.go:89] found id: "b110ac2f6aa3af2724fee2a70005a78d6d94180425eb8c585f94cc26ee06c01d"
	I1108 09:17:49.981339  322035 cri.go:89] found id: "98d55dc91e4cc5e33d70693f9526f6aa60b212a464cedaf28800663629becec9"
	I1108 09:17:49.981342  322035 cri.go:89] found id: "5f7fc9875b5fc7556f1ac83d8021344a544c674dc9f5c94000db6e9658a05653"
	I1108 09:17:49.981346  322035 cri.go:89] found id: "a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	I1108 09:17:49.981349  322035 cri.go:89] found id: "03cf5adcdb2bd89563eab50522293021aed573d100ffd0206d694d31bcf28fbd"
	I1108 09:17:49.981351  322035 cri.go:89] found id: ""
	I1108 09:17:49.981406  322035 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:49.993237  322035 retry.go:31] will retry after 218.919325ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:49Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:50.212702  322035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:50.226628  322035 pause.go:52] kubelet running: false
	I1108 09:17:50.226692  322035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:50.370206  322035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:50.370297  322035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:50.437302  322035 cri.go:89] found id: "a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5"
	I1108 09:17:50.437325  322035 cri.go:89] found id: "b6cde499f752ef145be3de31b57fb2d4179e3c94f0b0c1122da9b0663243c16c"
	I1108 09:17:50.437329  322035 cri.go:89] found id: "7328412edf383ebc9fbee37e5106e103265cecd11ef6e4b37aad9fc4ef5afa30"
	I1108 09:17:50.437332  322035 cri.go:89] found id: "40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125"
	I1108 09:17:50.437342  322035 cri.go:89] found id: "7fd46eea766854907abc014be16bd2d636925caf5dc40c846854d2596d5eb35b"
	I1108 09:17:50.437346  322035 cri.go:89] found id: "05f0737bca264f7f63b51b5b41958d7c656b10eb4e6383035b2181dc9b6cf531"
	I1108 09:17:50.437349  322035 cri.go:89] found id: "b110ac2f6aa3af2724fee2a70005a78d6d94180425eb8c585f94cc26ee06c01d"
	I1108 09:17:50.437352  322035 cri.go:89] found id: "98d55dc91e4cc5e33d70693f9526f6aa60b212a464cedaf28800663629becec9"
	I1108 09:17:50.437354  322035 cri.go:89] found id: "5f7fc9875b5fc7556f1ac83d8021344a544c674dc9f5c94000db6e9658a05653"
	I1108 09:17:50.437360  322035 cri.go:89] found id: "a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	I1108 09:17:50.437362  322035 cri.go:89] found id: "03cf5adcdb2bd89563eab50522293021aed573d100ffd0206d694d31bcf28fbd"
	I1108 09:17:50.437372  322035 cri.go:89] found id: ""
	I1108 09:17:50.437413  322035 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:50.449191  322035 retry.go:31] will retry after 517.708558ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:50Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:50.968002  322035 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:50.981270  322035 pause.go:52] kubelet running: false
	I1108 09:17:50.981365  322035 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:51.127116  322035 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:51.127199  322035 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:51.197923  322035 cri.go:89] found id: "a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5"
	I1108 09:17:51.197950  322035 cri.go:89] found id: "b6cde499f752ef145be3de31b57fb2d4179e3c94f0b0c1122da9b0663243c16c"
	I1108 09:17:51.197955  322035 cri.go:89] found id: "7328412edf383ebc9fbee37e5106e103265cecd11ef6e4b37aad9fc4ef5afa30"
	I1108 09:17:51.197959  322035 cri.go:89] found id: "40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125"
	I1108 09:17:51.197963  322035 cri.go:89] found id: "7fd46eea766854907abc014be16bd2d636925caf5dc40c846854d2596d5eb35b"
	I1108 09:17:51.197967  322035 cri.go:89] found id: "05f0737bca264f7f63b51b5b41958d7c656b10eb4e6383035b2181dc9b6cf531"
	I1108 09:17:51.197970  322035 cri.go:89] found id: "b110ac2f6aa3af2724fee2a70005a78d6d94180425eb8c585f94cc26ee06c01d"
	I1108 09:17:51.197974  322035 cri.go:89] found id: "98d55dc91e4cc5e33d70693f9526f6aa60b212a464cedaf28800663629becec9"
	I1108 09:17:51.197977  322035 cri.go:89] found id: "5f7fc9875b5fc7556f1ac83d8021344a544c674dc9f5c94000db6e9658a05653"
	I1108 09:17:51.197992  322035 cri.go:89] found id: "a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	I1108 09:17:51.197996  322035 cri.go:89] found id: "03cf5adcdb2bd89563eab50522293021aed573d100ffd0206d694d31bcf28fbd"
	I1108 09:17:51.198000  322035 cri.go:89] found id: ""
	I1108 09:17:51.198049  322035 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:51.233903  322035 out.go:203] 
	W1108 09:17:51.265205  322035 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:51Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:51.265232  322035 out.go:285] * 
	W1108 09:17:51.269696  322035 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:51.294933  322035 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p old-k8s-version-339286 --alsologtostderr -v=1 failed: exit status 80
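The failure above reduces to `sudo runc list -f json` exiting 1 because `/run/runc` (runc's default state directory when running as root) is missing on the node, so the pause path never obtains a container list and gives up after its retries. A minimal Go sketch of that check, shelling out exactly like the logged command; the struct fields and function names are illustrative, assumed from runc's JSON list output rather than taken from minikube's source:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcState mirrors the id/status fields of a `runc list -f json` entry
// (runc prints a JSON array of container state objects).
type runcState struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listRunning is a hypothetical stand-in for the "list running" step that
// fails in the log above.
func listRunning() ([]runcState, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// The failure mode in this report: exit status 1 with
		// "open /run/runc: no such file or directory" on stderr.
		return nil, fmt.Errorf("list running: runc: %w", err)
	}
	var states []runcState
	if err := json.Unmarshal(out, &states); err != nil {
		return nil, err
	}
	running := states[:0]
	for _, s := range states {
		if s.Status == "running" {
			running = append(running, s)
		}
	}
	return running, nil
}

func main() {
	if _, err := listRunning(); err != nil {
		fmt.Println("pause would abort here:", err)
	}
}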
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-339286
helpers_test.go:243: (dbg) docker inspect old-k8s-version-339286:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb",
	        "Created": "2025-11-08T09:15:31.664105217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:46.254380446Z",
	            "FinishedAt": "2025-11-08T09:16:45.343821845Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/hosts",
	        "LogPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb-json.log",
	        "Name": "/old-k8s-version-339286",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-339286:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-339286",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb",
	                "LowerDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-339286",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-339286/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-339286",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-339286",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-339286",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "033e7cde3fb4e483fe2e2664daeb4785bb6efc694044030eaec42029ee59f8e2",
	            "SandboxKey": "/var/run/docker/netns/033e7cde3fb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-339286": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:4b:28:3a:a3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "111659f5c16fa8de648fbd4b0737819906b512d8974c73538f9c6cac58753ac3",
	                    "EndpointID": "e243ebb2bc7bb3196b05953fdbec26d90759d630f6a8b2fca912172f35601c89",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-339286",
	                        "ce364047d86b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
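The `NetworkSettings.Ports` map in the inspect output above is the same structure the harness reads with docker's Go-template flag (the `cli_runner` lines earlier in the log) to discover the SSH host port. A small self-contained sketch of that lookup; the function name is illustrative:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort runs the template query seen in the cli_runner log lines and
// returns the host port docker mapped to the container's 22/tcp.
func sshHostPort(container string) (string, error) {
	tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("old-k8s-version-339286")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh host port:", port) // 33109 for the inspect output above
}

For the container state shown above this resolves to 33109, matching the sshutil line at the top of the pause log.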
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286: exit status 2 (330.041191ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
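`--format={{.Host}}` renders only the Host field of the status report, which is why stdout is just "Running" even though the command exits 2; the harness tolerates that, since a cluster whose pause just failed can still have a running host. A hedged sketch of that tolerance, with the exit-code handling mirroring the harness's "(may be ok)" comment rather than any documented minikube contract:

package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-339286")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out)) // "Running" in the run above
	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// Non-zero exit with usable stdout (exit status 2 here):
		// record it but keep going, as the post-mortem helper does.
		fmt.Printf("status error: exit status %d (may be ok)\n", exitErr.ExitCode())
	} else if err != nil {
		panic(err) // command could not run at all
	}
	fmt.Println("host:", host)
}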
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-339286 logs -n 25
E1108 09:17:51.672873    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-339286 logs -n 25: (1.267510379s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-732849 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo crio config                                                                                                                                                                                                             │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p bridge-732849                                                                                                                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                                                                                               │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:23.014181  318772 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:23.014490  318772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:23.014501  318772 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:23.014506  318772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:23.014688  318772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:23.015160  318772 out.go:368] Setting JSON to false
	I1108 09:17:23.016473  318772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3594,"bootTime":1762589849,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:23.016562  318772 start.go:143] virtualization: kvm guest
	I1108 09:17:23.018650  318772 out.go:179] * [default-k8s-diff-port-677902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:23.020167  318772 notify.go:221] Checking for updates...
	I1108 09:17:23.020234  318772 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:23.021653  318772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:23.023193  318772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:23.024687  318772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:23.026129  318772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:23.027502  318772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:23.029342  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:23.029838  318772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:23.055123  318772 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:23.055259  318772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:23.110228  318772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:17:23.100330014 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:23.110439  318772 docker.go:319] overlay module found
	I1108 09:17:23.112516  318772 out.go:179] * Using the docker driver based on existing profile
	I1108 09:17:23.113842  318772 start.go:309] selected driver: docker
	I1108 09:17:23.113858  318772 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:23.113935  318772 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:23.114523  318772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:23.170233  318772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:17:23.160701234 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:23.170557  318772 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:17:23.170587  318772 cni.go:84] Creating CNI manager for ""
	I1108 09:17:23.170630  318772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:23.170681  318772 start.go:353] cluster config:
	{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:23.173037  318772 out.go:179] * Starting "default-k8s-diff-port-677902" primary control-plane node in "default-k8s-diff-port-677902" cluster
	I1108 09:17:23.174652  318772 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:23.176085  318772 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:23.177478  318772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:23.177520  318772 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:23.177527  318772 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:23.177553  318772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:23.177617  318772 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:23.177632  318772 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:23.177725  318772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:17:23.200331  318772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:23.200356  318772 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:23.200379  318772 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:23.200409  318772 start.go:360] acquireMachinesLock for default-k8s-diff-port-677902: {Name:mk526669374d724485de61415f0aa79950bc7fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:23.200478  318772 start.go:364] duration metric: took 44.108µs to acquireMachinesLock for "default-k8s-diff-port-677902"
	I1108 09:17:23.200502  318772 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:17:23.200508  318772 fix.go:54] fixHost starting: 
	I1108 09:17:23.200797  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:23.222078  318772 fix.go:112] recreateIfNeeded on default-k8s-diff-port-677902: state=Stopped err=<nil>
	W1108 09:17:23.222126  318772 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 09:17:23.215019  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:25.215267  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:21.423354  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:23.921916  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:25.922381  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:22.022599  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:24.467748  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:26.467970  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:23.223920  318772 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-677902" ...
	I1108 09:17:23.224026  318772 cli_runner.go:164] Run: docker start default-k8s-diff-port-677902
	I1108 09:17:23.517410  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:23.541523  318772 kic.go:430] container "default-k8s-diff-port-677902" state is running.
	I1108 09:17:23.542096  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:23.566822  318772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:17:23.567040  318772 machine.go:94] provisionDockerMachine start ...
	I1108 09:17:23.567111  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:23.587476  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:23.587789  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:23.587807  318772 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:17:23.588482  318772 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43940->127.0.0.1:33124: read: connection reset by peer
	I1108 09:17:26.720488  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:17:26.720521  318772 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-677902"
	I1108 09:17:26.720581  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:26.739702  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:26.739910  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:26.739923  318772 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-677902 && echo "default-k8s-diff-port-677902" | sudo tee /etc/hostname
	I1108 09:17:26.879756  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:17:26.879827  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:26.900874  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:26.901124  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:26.901145  318772 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-677902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-677902/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-677902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:17:27.030475  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:17:27.030504  318772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:17:27.030544  318772 ubuntu.go:190] setting up certificates
	I1108 09:17:27.030558  318772 provision.go:84] configureAuth start
	I1108 09:17:27.030617  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:27.049655  318772 provision.go:143] copyHostCerts
	I1108 09:17:27.049718  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:17:27.049734  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:17:27.049821  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:17:27.049958  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:17:27.049978  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:17:27.050022  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:17:27.050114  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:17:27.050123  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:17:27.050149  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:17:27.050225  318772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-677902 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-677902 localhost minikube]
	I1108 09:17:27.218430  318772 provision.go:177] copyRemoteCerts
	I1108 09:17:27.218485  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:17:27.218517  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.238620  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.334066  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:17:27.353472  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:17:27.371621  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:17:27.389736  318772 provision.go:87] duration metric: took 359.161729ms to configureAuth
	I1108 09:17:27.389766  318772 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:17:27.389969  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:27.390099  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.408638  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:27.408840  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:27.408855  318772 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:17:27.700508  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:17:27.700535  318772 machine.go:97] duration metric: took 4.133482649s to provisionDockerMachine
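The provisioner writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts CRI-O. That the crio unit actually sources this file is an assumption worth checking by hand; a sketch:

    cat /etc/sysconfig/crio.minikube
    sudo systemctl cat crio | grep -n EnvironmentFile   # look for the sysconfig drop-in
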
	I1108 09:17:27.700549  318772 start.go:293] postStartSetup for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:17:27.700562  318772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:17:27.700637  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:17:27.700708  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.722016  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.818358  318772 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:17:27.822257  318772 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:17:27.822295  318772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:17:27.822309  318772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:17:27.822368  318772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:17:27.822472  318772 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:17:27.822590  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:17:27.830681  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:27.849577  318772 start.go:296] duration metric: took 149.013814ms for postStartSetup
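The filesync scan mirrors anything under $MINIKUBE_HOME/.minikube/files into the node at the same relative path, which is how files/etc/ssl/certs/93692.pem lands in /etc/ssl/certs above. A sketch of staging an extra file the same way (my-extra-ca.pem is a hypothetical name):

    # on the host; copied to /etc/ssl/certs/my-extra-ca.pem in the node on the next start
    mkdir -p ~/.minikube/files/etc/ssl/certs
    cp my-extra-ca.pem ~/.minikube/files/etc/ssl/certs/
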
	I1108 09:17:27.849653  318772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:17:27.849714  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.869059  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.960711  318772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:17:27.965862  318772 fix.go:56] duration metric: took 4.765347999s for fixHost
	I1108 09:17:27.965889  318772 start.go:83] releasing machines lock for "default-k8s-diff-port-677902", held for 4.765396741s
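The two df probes just above read the used percentage and the free gigabytes of /var; NR==2 selects the data row under df's header. Standalone equivalents:

    df -h  /var | awk 'NR==2{print $5}'   # Use% column, e.g. 23%
    df -BG /var | awk 'NR==2{print $4}'   # Avail column, e.g. 150G
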
	I1108 09:17:27.965955  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:27.984988  318772 ssh_runner.go:195] Run: cat /version.json
	I1108 09:17:27.985031  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.985093  318772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:17:27.985177  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:28.004610  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:28.004907  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:28.149001  318772 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:28.155580  318772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:17:28.192252  318772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:17:28.197116  318772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:17:28.197175  318772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:17:28.205203  318772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
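The find invocation above is logged with its shell quoting stripped. Restored, it reads roughly as below: it renames every bridge or podman CNI config that is not already suffixed .mk_disabled:

    sudo find /etc/cni/net.d -maxdepth 1 -type f \
      \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
      -printf '%p, ' -exec sh -c 'sudo mv {} {}.mk_disabled' \;
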
	I1108 09:17:28.205224  318772 start.go:496] detecting cgroup driver to use...
	I1108 09:17:28.205255  318772 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:17:28.205303  318772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:17:28.220826  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:17:28.234319  318772 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:17:28.234394  318772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:17:28.249292  318772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:17:28.262217  318772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:17:28.343998  318772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:17:28.425777  318772 docker.go:234] disabling docker service ...
	I1108 09:17:28.425843  318772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:17:28.440815  318772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:17:28.455138  318772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:17:28.537601  318772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:17:28.622788  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:17:28.635585  318772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:17:28.649621  318772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:17:28.649672  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.659171  318772 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:17:28.659244  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.668583  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.677393  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.686251  318772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:17:28.694982  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.704557  318772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.713519  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.723588  318772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:17:28.731786  318772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
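Two runtime config files come out of the block above. The tee at 09:17:28.635 leaves a one-line /etc/crictl.yaml pointing crictl at the CRI-O socket, and the sed chain should leave the 02-crio.conf drop-in with roughly these keys (an approximation reconstructed from the commands; only the touched keys are shown):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # approximate keys in /etc/crio/crio.conf.d/02-crio.conf after the edits
    pause_image = "registry.k8s.io/pause:3.10.1"
    cgroup_manager = "systemd"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]
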
	I1108 09:17:28.739658  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:28.823880  318772 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:17:28.925939  318772 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:17:28.926009  318772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:17:28.930260  318772 start.go:564] Will wait 60s for crictl version
	I1108 09:17:28.930332  318772 ssh_runner.go:195] Run: which crictl
	I1108 09:17:28.934146  318772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:17:28.959101  318772 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:17:28.959184  318772 ssh_runner.go:195] Run: crio --version
	I1108 09:17:28.987183  318772 ssh_runner.go:195] Run: crio --version
	I1108 09:17:29.017768  318772 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:17:29.019019  318772 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:17:29.036798  318772 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:17:29.041036  318772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:17:29.051759  318772 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:17:29.051887  318772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:29.051933  318772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:17:29.084447  318772 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:17:29.084468  318772 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:17:29.084512  318772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:17:29.110976  318772 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:17:29.111002  318772 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:17:29.111018  318772 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1108 09:17:29.111172  318772 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-677902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
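The rendered kubelet flags above become a systemd drop-in; the scp a few lines below writes them to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes). To inspect the merged unit on the node:

    sudo systemctl cat kubelet   # kubelet.service plus the 10-kubeadm.conf drop-in
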
	I1108 09:17:29.111249  318772 ssh_runner.go:195] Run: crio config
	I1108 09:17:29.155244  318772 cni.go:84] Creating CNI manager for ""
	I1108 09:17:29.155266  318772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:29.155307  318772 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:17:29.155338  318772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-677902 NodeName:default-k8s-diff-port-677902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:17:29.155495  318772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-677902"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
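Before kubeadm consumes a config like the one above, it can be sanity-checked in place; a hedged sketch using the binary path from this log (kubeadm config validate is available in recent kubeadm releases):

    sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new
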
	
	I1108 09:17:29.155551  318772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:17:29.163669  318772 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:17:29.163736  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:17:29.171252  318772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:17:29.184573  318772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:17:29.196971  318772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:17:29.209695  318772 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:17:29.213735  318772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
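The /etc/hosts rewrite above strips any stale control-plane.minikube.internal entry, appends the fresh mapping, and installs the result via sudo cp from a temp file; a plain sudo ... > /etc/hosts would fail because the redirect is opened by the unprivileged shell. Generic form:

    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      echo $'192.168.76.2\tcontrol-plane.minikube.internal'
    } > /tmp/h.$$ && sudo cp /tmp/h.$$ /etc/hosts
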
	I1108 09:17:29.224550  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:29.306727  318772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:17:29.333961  318772 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902 for IP: 192.168.76.2
	I1108 09:17:29.333990  318772 certs.go:195] generating shared ca certs ...
	I1108 09:17:29.334022  318772 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:29.334192  318772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:17:29.334258  318772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:17:29.334275  318772 certs.go:257] generating profile certs ...
	I1108 09:17:29.334443  318772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key
	I1108 09:17:29.334517  318772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273
	I1108 09:17:29.334567  318772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key
	I1108 09:17:29.334703  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:17:29.334750  318772 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:17:29.334763  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:17:29.334800  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:17:29.334836  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:17:29.334868  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:17:29.334923  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:29.335755  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:17:29.358546  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:17:29.382353  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:17:29.403720  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:17:29.426530  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:17:29.450442  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:17:29.471845  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:17:29.489173  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:17:29.506582  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:17:29.524071  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:17:29.543268  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:17:29.561916  318772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:17:29.574785  318772 ssh_runner.go:195] Run: openssl version
	I1108 09:17:29.581198  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:17:29.590123  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.593890  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.593942  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.629344  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:17:29.637798  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:17:29.646788  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.650810  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.650886  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.686144  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:17:29.694870  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:17:29.704343  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.708244  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.708301  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.747154  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
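The openssl x509 -hash calls compute the subject-hash filename OpenSSL uses to resolve CAs in /etc/ssl/certs, which is why minikubeCA.pem is linked as b5213941.0 here. The same two steps by hand:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
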
	I1108 09:17:29.756245  318772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:17:29.760208  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:17:29.798830  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:17:29.835366  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:17:29.881735  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:17:29.926935  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:17:29.975380  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
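Each -checkend 86400 probe exits non-zero if the certificate expires within the next 24 hours (86400 seconds), which is what flags a cert for regeneration on restart. Standalone form:

    if ! sudo openssl x509 -noout -checkend 86400 \
         -in /var/lib/minikube/certs/apiserver.crt; then
      echo "apiserver.crt expires within 24h; regenerate"
    fi
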
	I1108 09:17:30.025917  318772 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:30.026024  318772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:30.026120  318772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:30.057401  318772 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:17:30.057427  318772 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:17:30.057433  318772 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:17:30.057439  318772 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:17:30.057447  318772 cri.go:89] found id: ""
	I1108 09:17:30.057485  318772 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:17:30.069676  318772 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:30Z" level=error msg="open /run/runc: no such file or directory"
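This failure is benign: runc keeps container state under /run/runc, and that directory only exists once runc has created a container, so a missing state dir means nothing is paused. The tolerant probe pattern:

    sudo runc list -f json 2>/dev/null || echo "no runc state dir; nothing paused"
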
	I1108 09:17:30.069736  318772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:17:30.078414  318772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:17:30.078433  318772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:17:30.078477  318772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:17:30.086093  318772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:17:30.087564  318772 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-677902" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:30.088577  318772 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-5860/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-677902" cluster setting kubeconfig missing "default-k8s-diff-port-677902" context setting]
	I1108 09:17:30.089991  318772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.092252  318772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:17:30.100764  318772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:17:30.100804  318772 kubeadm.go:602] duration metric: took 22.36077ms to restartPrimaryControlPlane
	I1108 09:17:30.100814  318772 kubeadm.go:403] duration metric: took 74.907828ms to StartCluster
	I1108 09:17:30.100831  318772 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.100935  318772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:30.103426  318772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.103692  318772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:30.103761  318772 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:17:30.103862  318772 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.103881  318772 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-677902"
	W1108 09:17:30.103890  318772 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:17:30.103917  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.103945  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:30.103995  318772 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.104069  318772 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.104010  318772 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-677902"
	I1108 09:17:30.104098  318772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-677902"
	W1108 09:17:30.104104  318772 addons.go:248] addon dashboard should already be in state true
	I1108 09:17:30.104134  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.104426  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.104485  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.104734  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.126565  318772 out.go:179] * Verifying Kubernetes components...
	I1108 09:17:30.128137  318772 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-677902"
	W1108 09:17:30.128160  318772 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:17:30.128186  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.128648  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.129880  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:30.131252  318772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:17:30.131276  318772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:17:30.134171  318772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:17:30.134193  318772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:17:30.134249  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.134885  318772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1108 09:17:27.715578  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:30.215915  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:27.923031  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:29.925443  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:28.968151  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:31.470050  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:30.138683  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:17:30.138707  318772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:17:30.138768  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.155528  318772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:17:30.155552  318772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:17:30.155610  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.159215  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.161996  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.184265  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.283069  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:17:30.283103  318772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:17:30.283635  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:17:30.294542  318772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:17:30.295994  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:17:30.301109  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:17:30.301130  318772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:17:30.321171  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:17:30.321197  318772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:17:30.339306  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:17:30.339332  318772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:17:30.353887  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:17:30.353939  318772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:17:30.367921  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:17:30.367943  318772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:17:30.380743  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:17:30.380768  318772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:17:30.393662  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:17:30.393688  318772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:17:30.407461  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:17:30.407490  318772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:17:30.422749  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:17:32.507801  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.224127884s)
	I1108 09:17:32.507827  318772 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.213247429s)
	I1108 09:17:32.507867  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.211842649s)
	I1108 09:17:32.507875  318772 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-677902" to be "Ready" ...
	I1108 09:17:32.508003  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.085193451s)
	I1108 09:17:32.510165  318772 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-677902 addons enable metrics-server
	
	I1108 09:17:32.518886  318772 node_ready.go:49] node "default-k8s-diff-port-677902" is "Ready"
	I1108 09:17:32.518917  318772 node_ready.go:38] duration metric: took 11.026405ms for node "default-k8s-diff-port-677902" to be "Ready" ...
	I1108 09:17:32.518932  318772 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:17:32.518979  318772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:17:32.524408  318772 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:17:32.525554  318772 addons.go:515] duration metric: took 2.421802346s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:17:32.534116  318772 api_server.go:72] duration metric: took 2.430387553s to wait for apiserver process to appear ...
	I1108 09:17:32.534161  318772 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:17:32.534186  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:32.538878  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:32.538905  318772 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
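The [+]/[-] listing is kube-apiserver's verbose healthz output; the two [-] poststarthooks (RBAC bootstrap roles, system priority classes) normally flip to ok moments after startup, matching the 200 later in this log. To probe by hand (hedged: anonymous access to /healthz may be blocked, in which case client certs are required):

    curl -sk 'https://192.168.76.2:8444/healthz?verbose'
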
	W1108 09:17:32.714163  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:34.715192  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	I1108 09:17:35.715420  310009 pod_ready.go:94] pod "coredns-5dd5756b68-88pvx" is "Ready"
	I1108 09:17:35.715446  310009 pod_ready.go:86] duration metric: took 39.006203091s for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.718113  310009 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.721921  310009 pod_ready.go:94] pod "etcd-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.721942  310009 pod_ready.go:86] duration metric: took 3.80625ms for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.724454  310009 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.728081  310009 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.728098  310009 pod_ready.go:86] duration metric: took 3.62396ms for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.730525  310009 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.914488  310009 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.914516  310009 pod_ready.go:86] duration metric: took 183.97019ms for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:17:32.424544  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:34.922175  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:33.967021  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:35.967176  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:36.113947  310009 pod_ready.go:83] waiting for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:36.517018  310009 pod_ready.go:94] pod "kube-proxy-v4l6x" is "Ready"
	I1108 09:17:36.517049  310009 pod_ready.go:86] duration metric: took 403.07566ms for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:36.714683  310009 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:37.115339  310009 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-339286" is "Ready"
	I1108 09:17:37.115372  310009 pod_ready.go:86] duration metric: took 400.662562ms for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:37.115387  310009 pod_ready.go:40] duration metric: took 40.411019881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:37.176895  310009 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:17:37.178443  310009 out.go:203] 
	W1108 09:17:37.180072  310009 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:17:37.184774  310009 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:17:37.186452  310009 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-339286" cluster and "default" namespace by default
	I1108 09:17:33.034301  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:33.039725  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:33.039752  318772 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:17:33.534363  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:33.538638  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1108 09:17:33.539622  318772 api_server.go:141] control plane version: v1.34.1
	I1108 09:17:33.539644  318772 api_server.go:131] duration metric: took 1.005476188s to wait for apiserver health ...
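
	The verbose probe above can be rerun by hand; kubectl routes it through the configured context (assumed here to match the minikube profile name), and curl needs -k since the apiserver certificate is self-signed. /healthz is readable anonymously via the default system:public-info-viewer binding.

	    $ kubectl --context default-k8s-diff-port-677902 get --raw '/healthz?verbose'
	    $ curl -k 'https://192.168.76.2:8444/healthz?verbose'
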
	I1108 09:17:33.539652  318772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:17:33.542649  318772 system_pods.go:59] 8 kube-system pods found
	I1108 09:17:33.542678  318772 system_pods.go:61] "coredns-66bc5c9577-x49dj" [ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:17:33.542686  318772 system_pods.go:61] "etcd-default-k8s-diff-port-677902" [075b3604-f07a-4acb-8680-f000540900f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:17:33.542693  318772 system_pods.go:61] "kindnet-x89ph" [5f49623a-57d7-4854-8c1b-b4ca027bd24c] Running
	I1108 09:17:33.542705  318772 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-677902" [9787b81f-a90f-464b-8a61-d4ec701472f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:17:33.542713  318772 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-677902" [28070357-a633-4a19-a618-390b7a199a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:17:33.542723  318772 system_pods.go:61] "kube-proxy-5d9f2" [e880f62e-f713-4254-98e7-84f3941024f0] Running
	I1108 09:17:33.542730  318772 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-677902" [069d093e-35cb-4235-942b-cf15e67b9432] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:17:33.542734  318772 system_pods.go:61] "storage-provisioner" [00375859-41ff-4f26-b07f-73a5d30e46ee] Running
	I1108 09:17:33.542741  318772 system_pods.go:74] duration metric: took 3.082538ms to wait for pod list to return data ...
	I1108 09:17:33.542750  318772 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:17:33.545077  318772 default_sa.go:45] found service account: "default"
	I1108 09:17:33.545094  318772 default_sa.go:55] duration metric: took 2.339095ms for default service account to be created ...
	I1108 09:17:33.545103  318772 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:17:33.547820  318772 system_pods.go:86] 8 kube-system pods found
	I1108 09:17:33.547846  318772 system_pods.go:89] "coredns-66bc5c9577-x49dj" [ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:17:33.547854  318772 system_pods.go:89] "etcd-default-k8s-diff-port-677902" [075b3604-f07a-4acb-8680-f000540900f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:17:33.547860  318772 system_pods.go:89] "kindnet-x89ph" [5f49623a-57d7-4854-8c1b-b4ca027bd24c] Running
	I1108 09:17:33.547867  318772 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-677902" [9787b81f-a90f-464b-8a61-d4ec701472f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:17:33.547875  318772 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-677902" [28070357-a633-4a19-a618-390b7a199a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:17:33.547879  318772 system_pods.go:89] "kube-proxy-5d9f2" [e880f62e-f713-4254-98e7-84f3941024f0] Running
	I1108 09:17:33.547884  318772 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-677902" [069d093e-35cb-4235-942b-cf15e67b9432] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:17:33.547889  318772 system_pods.go:89] "storage-provisioner" [00375859-41ff-4f26-b07f-73a5d30e46ee] Running
	I1108 09:17:33.547898  318772 system_pods.go:126] duration metric: took 2.79107ms to wait for k8s-apps to be running ...
	I1108 09:17:33.547906  318772 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:17:33.547945  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:33.561240  318772 system_svc.go:56] duration metric: took 13.32927ms WaitForService to wait for kubelet
	I1108 09:17:33.561268  318772 kubeadm.go:587] duration metric: took 3.457542806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:17:33.561299  318772 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:17:33.563775  318772 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:17:33.563796  318772 node_conditions.go:123] node cpu capacity is 8
	I1108 09:17:33.563807  318772 node_conditions.go:105] duration metric: took 2.498943ms to run NodePressure ...
	I1108 09:17:33.563817  318772 start.go:242] waiting for startup goroutines ...
	I1108 09:17:33.563823  318772 start.go:247] waiting for cluster config update ...
	I1108 09:17:33.563833  318772 start.go:256] writing updated cluster config ...
	I1108 09:17:33.564106  318772 ssh_runner.go:195] Run: rm -f paused
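
	The default-service-account wait is just a get; once the account exists, pods in the namespace can mount its token:

	    $ kubectl --context default-k8s-diff-port-677902 -n default get serviceaccount default
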
	I1108 09:17:33.567850  318772 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:33.571308  318772 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x49dj" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:17:35.577410  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:37.578193  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:36.923004  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:38.923619  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:40.924149  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:37.973264  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:40.469582  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:39.578963  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:42.077248  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:17:42.467997  313008 pod_ready.go:94] pod "coredns-66bc5c9577-zdb97" is "Ready"
	I1108 09:17:42.468035  313008 pod_ready.go:86] duration metric: took 34.505824056s for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.470522  313008 pod_ready.go:83] waiting for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.474338  313008 pod_ready.go:94] pod "etcd-no-preload-220714" is "Ready"
	I1108 09:17:42.474362  313008 pod_ready.go:86] duration metric: took 3.818729ms for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.476372  313008 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.480064  313008 pod_ready.go:94] pod "kube-apiserver-no-preload-220714" is "Ready"
	I1108 09:17:42.480092  313008 pod_ready.go:86] duration metric: took 3.702017ms for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.481978  313008 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.667986  313008 pod_ready.go:94] pod "kube-controller-manager-no-preload-220714" is "Ready"
	I1108 09:17:42.668016  313008 pod_ready.go:86] duration metric: took 186.016263ms for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.866316  313008 pod_ready.go:83] waiting for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.266611  313008 pod_ready.go:94] pod "kube-proxy-66cm9" is "Ready"
	I1108 09:17:43.266646  313008 pod_ready.go:86] duration metric: took 400.304671ms for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.465603  313008 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.866064  313008 pod_ready.go:94] pod "kube-scheduler-no-preload-220714" is "Ready"
	I1108 09:17:43.866090  313008 pod_ready.go:86] duration metric: took 400.463165ms for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.866101  313008 pod_ready.go:40] duration metric: took 35.96660519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:43.912507  313008 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:17:43.914651  313008 out.go:179] * Done! kubectl is now configured to use "no-preload-220714" cluster and "default" namespace by default
	I1108 09:17:43.422936  312299 pod_ready.go:94] pod "coredns-66bc5c9577-cbw4j" is "Ready"
	I1108 09:17:43.422965  312299 pod_ready.go:86] duration metric: took 35.505880955s for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.425909  312299 pod_ready.go:83] waiting for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.431928  312299 pod_ready.go:94] pod "etcd-embed-certs-271910" is "Ready"
	I1108 09:17:43.431954  312299 pod_ready.go:86] duration metric: took 6.020724ms for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.434331  312299 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.438424  312299 pod_ready.go:94] pod "kube-apiserver-embed-certs-271910" is "Ready"
	I1108 09:17:43.438442  312299 pod_ready.go:86] duration metric: took 4.093369ms for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.440478  312299 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.620323  312299 pod_ready.go:94] pod "kube-controller-manager-embed-certs-271910" is "Ready"
	I1108 09:17:43.620365  312299 pod_ready.go:86] duration metric: took 179.862516ms for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.820429  312299 pod_ready.go:83] waiting for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.221050  312299 pod_ready.go:94] pod "kube-proxy-lwbl6" is "Ready"
	I1108 09:17:44.221084  312299 pod_ready.go:86] duration metric: took 400.626058ms for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.421474  312299 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.820796  312299 pod_ready.go:94] pod "kube-scheduler-embed-certs-271910" is "Ready"
	I1108 09:17:44.820825  312299 pod_ready.go:86] duration metric: took 399.325955ms for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.820836  312299 pod_ready.go:40] duration metric: took 36.908910218s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:44.864186  312299 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:17:44.865991  312299 out.go:179] * Done! kubectl is now configured to use "embed-certs-271910" cluster and "default" namespace by default
	W1108 09:17:44.577222  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:46.577391  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
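
	The "Ready" polling above keys off the PodReady condition; the same check by hand, with the kube-dns label taken from the wait list in the log:

	    $ kubectl --context default-k8s-diff-port-677902 -n kube-system get pods -l k8s-app=kube-dns \
	        -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'
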
	
	
	==> CRI-O <==
	Nov 08 09:17:14 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:14.502761035Z" level=info msg="Started container" PID=1732 containerID=ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper id=761f25ac-7c5a-4746-a01e-4c1889e8c772 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ba348a89182f89973383626ae93b4e0cf9381ae86f0d52fa3d51909a1214f08
	Nov 08 09:17:15 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:15.405261034Z" level=info msg="Removing container: 61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d" id=f3a23684-7bf9-434d-a4a3-0a86363f05ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:15 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:15.420693779Z" level=info msg="Removed container 61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=f3a23684-7bf9-434d-a4a3-0a86363f05ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.435422784Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=95fefa2f-cc9b-467d-b09e-0861aed4e816 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.436336946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f86c3b8e-5751-46ea-b614-a8d1aed4adb3 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.437346459Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=107d6ede-4b94-494a-a01e-69b6e25ac10c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.437478425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.441698255Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.441887779Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fc474c731d8335c9f719db5ef3d64276011cc574476f6d36112f025ee2f6dd15/merged/etc/passwd: no such file or directory"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.441923182Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fc474c731d8335c9f719db5ef3d64276011cc574476f6d36112f025ee2f6dd15/merged/etc/group: no such file or directory"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.442203181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.466640405Z" level=info msg="Created container a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5: kube-system/storage-provisioner/storage-provisioner" id=107d6ede-4b94-494a-a01e-69b6e25ac10c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.467342603Z" level=info msg="Starting container: a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5" id=b429592d-2ba9-4dc8-809b-0e48c9292429 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.469145289Z" level=info msg="Started container" PID=1748 containerID=a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5 description=kube-system/storage-provisioner/storage-provisioner id=b429592d-2ba9-4dc8-809b-0e48c9292429 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34ef3c2686eeea72180534dfe3bda9f3bab89357ac6970b5dfdc5291f863192b
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.325047506Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fb525d7-75be-4714-a305-728881aa2274 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.325991923Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ab1a83ca-92aa-4b53-b636-9115c37e749c name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.32712879Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=8d3249ca-2900-4a5f-81db-16d33ba883d1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.327347692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.336913036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.337673768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.358380025Z" level=info msg="Created container a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=8d3249ca-2900-4a5f-81db-16d33ba883d1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.359042345Z" level=info msg="Starting container: a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857" id=18f40cb0-f3c5-498c-a7aa-fedc73462898 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.361137744Z" level=info msg="Started container" PID=1764 containerID=a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper id=18f40cb0-f3c5-498c-a7aa-fedc73462898 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ba348a89182f89973383626ae93b4e0cf9381ae86f0d52fa3d51909a1214f08
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.447672732Z" level=info msg="Removing container: ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391" id=54e9d57a-cbd0-4891-b759-1e1c3d181653 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.457593175Z" level=info msg="Removed container ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=54e9d57a-cbd0-4891-b759-1e1c3d181653 name=/runtime.v1.RuntimeService/RemoveContainer
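
	On the node itself, crictl can follow the lifecycle CRI-O just logged; the id below is the scraper container started at 09:17:29 (it changes each time the backoff loop replaces it):

	    $ minikube -p old-k8s-version-339286 ssh -- sudo crictl logs a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857
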
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a316eac5d63e2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   4ba348a89182f       dashboard-metrics-scraper-5f989dc9cf-2xgql       kubernetes-dashboard
	a6b3caa95b08e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           25 seconds ago      Running             storage-provisioner         1                   34ef3c2686eee       storage-provisioner                              kube-system
	03cf5adcdb2bd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   41 seconds ago      Running             kubernetes-dashboard        0                   1affe07e514f3       kubernetes-dashboard-8694d4445c-tt95r            kubernetes-dashboard
	b6cde499f752e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           56 seconds ago      Running             coredns                     0                   3af4ae025e1b7       coredns-5dd5756b68-88pvx                         kube-system
	8166883d857eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           56 seconds ago      Running             busybox                     1                   da61c4391f92f       busybox                                          default
	7328412edf383       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           56 seconds ago      Running             kube-proxy                  0                   5d5cf6630cb55       kube-proxy-v4l6x                                 kube-system
	40c5750e71e71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           56 seconds ago      Exited              storage-provisioner         0                   34ef3c2686eee       storage-provisioner                              kube-system
	7fd46eea76685       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           56 seconds ago      Running             kindnet-cni                 0                   069345d16e97a       kindnet-6d922                                    kube-system
	05f0737bca264       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           59 seconds ago      Running             etcd                        0                   9a74d77a2add9       etcd-old-k8s-version-339286                      kube-system
	b110ac2f6aa3a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           59 seconds ago      Running             kube-scheduler              0                   11b8347c7b0b9       kube-scheduler-old-k8s-version-339286            kube-system
	98d55dc91e4cc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           59 seconds ago      Running             kube-apiserver              0                   2309dbe9620bb       kube-apiserver-old-k8s-version-339286            kube-system
	5f7fc9875b5fc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           59 seconds ago      Running             kube-controller-manager     0                   839b6a6c9bac1       kube-controller-manager-old-k8s-version-339286   kube-system
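
	The table above is crictl's default listing; regenerate it, including exited attempts, with:

	    $ minikube -p old-k8s-version-339286 ssh -- sudo crictl ps -a
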
	
	
	==> coredns [b6cde499f752ef145be3de31b57fb2d4179e3c94f0b0c1122da9b0663243c16c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35905 - 38774 "HINFO IN 5868626517375141879.3157310105645428470. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.416973272s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
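
	The closing warning is CoreDNS timing out against the kubernetes service VIP (10.96.0.1), typically transient while kube-proxy is still programming service rules after a restart; confirming the VIP and tailing the pod shows whether it recovered:

	    $ kubectl --context old-k8s-version-339286 get svc kubernetes -o wide
	    $ kubectl --context old-k8s-version-339286 -n kube-system logs coredns-5dd5756b68-88pvx --tail=20
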
	
	
	==> describe nodes <==
	Name:               old-k8s-version-339286
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-339286
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=old-k8s-version-339286
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_15_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:15:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-339286
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:17:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:16:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-339286
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                67b4f6ec-c7a7-47b7-a68b-0baf0383287f
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         95s
	  kube-system                 coredns-5dd5756b68-88pvx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     111s
	  kube-system                 etcd-old-k8s-version-339286                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m4s
	  kube-system                 kindnet-6d922                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      112s
	  kube-system                 kube-apiserver-old-k8s-version-339286             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-controller-manager-old-k8s-version-339286    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 kube-proxy-v4l6x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-scheduler-old-k8s-version-339286             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m4s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-2xgql        0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-tt95r             0 (0%)        0 (0%)      0 (0%)           0 (0%)         45s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 110s                   kube-proxy       
	  Normal  Starting                 56s                    kube-proxy       
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m10s (x8 over 2m10s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m4s                   kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m4s                   kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m4s                   kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m4s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           112s                   node-controller  Node old-k8s-version-339286 event: Registered Node old-k8s-version-339286 in Controller
	  Normal  NodeReady                99s                    kubelet          Node old-k8s-version-339286 status is now: NodeReady
	  Normal  Starting                 60s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x9 over 60s)      kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)      kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x7 over 60s)      kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           45s                    node-controller  Node old-k8s-version-339286 event: Registered Node old-k8s-version-339286 in Controller
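
	The same view, or just the condition summary, comes straight from the API:

	    $ kubectl --context old-k8s-version-339286 describe node old-k8s-version-339286
	    $ kubectl --context old-k8s-version-339286 get node old-k8s-version-339286 \
	        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
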
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
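
	The martian-source lines are the kernel flagging packets that arrive on eth0 with a pod-CIDR (10.244.0.0/24) source address, usually benign noise in this nested-container setup; filter for them on the node with:

	    $ minikube -p old-k8s-version-339286 ssh -- sudo dmesg | grep martian | tail -n 5
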
	
	
	==> etcd [05f0737bca264f7f63b51b5b41958d7c656b10eb4e6383035b2181dc9b6cf531] <==
	{"level":"info","ts":"2025-11-08T09:16:52.879062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-11-08T09:16:52.879195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-08T09:16:52.879362Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:16:52.879377Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:16:52.879404Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:16:52.879473Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:16:52.883491Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T09:16:52.883724Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T09:16:52.883752Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T09:16:52.88389Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-08T09:16:52.883935Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-08T09:16:54.069737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T09:16:54.069783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T09:16:54.069843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-08T09:16:54.069862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.069869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.069879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.069889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.070926Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-339286 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T09:16:54.070933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:16:54.070958Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:16:54.071186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T09:16:54.071217Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T09:16:54.072225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-08T09:16:54.072225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:17:52 up  1:00,  0 user,  load average: 4.26, 3.97, 2.60
	Linux old-k8s-version-339286 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7fd46eea766854907abc014be16bd2d636925caf5dc40c846854d2596d5eb35b] <==
	I1108 09:16:55.917297       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:55.917588       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1108 09:16:55.917762       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:55.917780       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:55.917799       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:56.213156       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:56.213182       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:56.213195       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:56.214129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:56.787356       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:56.787401       1 metrics.go:72] Registering metrics
	I1108 09:16:56.787472       1 controller.go:711] "Syncing nftables rules"
	I1108 09:17:06.213455       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:06.213535       1 main.go:301] handling current node
	I1108 09:17:16.213394       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:16.213482       1 main.go:301] handling current node
	I1108 09:17:26.213866       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:26.213912       1 main.go:301] handling current node
	I1108 09:17:36.213345       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:36.213403       1 main.go:301] handling current node
	I1108 09:17:46.213313       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:46.213344       1 main.go:301] handling current node
	
	
	==> kube-apiserver [98d55dc91e4cc5e33d70693f9526f6aa60b212a464cedaf28800663629becec9] <==
	I1108 09:16:55.007613       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1108 09:16:55.043749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:16:55.053843       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 09:16:55.109175       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 09:16:55.109317       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 09:16:55.109342       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 09:16:55.109309       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 09:16:55.109368       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 09:16:55.109258       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 09:16:55.109379       1 aggregator.go:166] initial CRD sync complete...
	I1108 09:16:55.109390       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 09:16:55.109397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:16:55.109405       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:16:55.109672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:16:55.991242       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 09:16:56.007135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:16:56.029338       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 09:16:56.054212       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:16:56.064108       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:16:56.074490       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 09:16:56.126986       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.255.85"}
	I1108 09:16:56.143939       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.2.160"}
	I1108 09:17:07.319055       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 09:17:07.367223       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 09:17:07.529973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
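
	The two clusterIP allocations logged at 09:16:56 belong to the dashboard addon's services:

	    $ kubectl --context old-k8s-version-339286 -n kubernetes-dashboard get svc -o wide
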
	
	
	==> kube-controller-manager [5f7fc9875b5fc7556f1ac83d8021344a544c674dc9f5c94000db6e9658a05653] <==
	I1108 09:17:07.385885       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-tt95r"
	I1108 09:17:07.392787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.222099ms"
	I1108 09:17:07.396776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.196831ms"
	I1108 09:17:07.399767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.927264ms"
	I1108 09:17:07.399864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.931µs"
	I1108 09:17:07.402270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.440447ms"
	I1108 09:17:07.402406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.96µs"
	I1108 09:17:07.409179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.703µs"
	I1108 09:17:07.417108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.211µs"
	I1108 09:17:07.470396       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:17:07.517917       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1108 09:17:07.533639       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1108 09:17:07.549390       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:17:07.886799       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:17:07.900391       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:17:07.900457       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 09:17:11.423994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.561707ms"
	I1108 09:17:11.424963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="863.775µs"
	I1108 09:17:14.414119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="1.784916ms"
	I1108 09:17:15.422883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.156µs"
	I1108 09:17:16.471936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.136µs"
	I1108 09:17:29.458911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.732µs"
	I1108 09:17:35.361571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.919924ms"
	I1108 09:17:35.361693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.518µs"
	I1108 09:17:37.711240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.937µs"
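
	The repeated replica-set syncs are the controller reacting to the crash-looping scraper; the current state of those objects:

	    $ kubectl --context old-k8s-version-339286 -n kubernetes-dashboard get deploy,rs,pods
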
	
	
	==> kube-proxy [7328412edf383ebc9fbee37e5106e103265cecd11ef6e4b37aad9fc4ef5afa30] <==
	I1108 09:16:55.790498       1 server_others.go:69] "Using iptables proxy"
	I1108 09:16:55.801546       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1108 09:16:55.819900       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:55.822220       1 server_others.go:152] "Using iptables Proxier"
	I1108 09:16:55.822251       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 09:16:55.822259       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 09:16:55.822324       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 09:16:55.822571       1 server.go:846] "Version info" version="v1.28.0"
	I1108 09:16:55.822590       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:55.823231       1 config.go:188] "Starting service config controller"
	I1108 09:16:55.823269       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 09:16:55.823304       1 config.go:97] "Starting endpoint slice config controller"
	I1108 09:16:55.823317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 09:16:55.823395       1 config.go:315] "Starting node config controller"
	I1108 09:16:55.823412       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 09:16:55.924444       1 shared_informer.go:318] Caches are synced for node config
	I1108 09:16:55.924478       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 09:16:55.924468       1 shared_informer.go:318] Caches are synced for service config
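
	With the iptables proxier selected above, service VIPs resolve through nat-table chains on the node; KUBE-SERVICES is kube-proxy's standard entry chain:

	    $ minikube -p old-k8s-version-339286 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 15
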
	
	
	==> kube-scheduler [b110ac2f6aa3af2724fee2a70005a78d6d94180425eb8c585f94cc26ee06c01d] <==
	I1108 09:16:53.226438       1 serving.go:348] Generated self-signed cert in-memory
	I1108 09:16:55.063991       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 09:16:55.064019       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:55.067610       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1108 09:16:55.067630       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:16:55.067658       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 09:16:55.067633       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1108 09:16:55.067627       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:16:55.067782       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 09:16:55.068496       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 09:16:55.068741       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 09:16:55.167979       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 09:16:55.168012       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 09:16:55.168025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 09:17:07 old-k8s-version-339286 kubelet[725]: I1108 09:17:07.508795     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585gq\" (UniqueName: \"kubernetes.io/projected/598b85f9-cf83-45bf-ac00-667cae766168-kube-api-access-585gq\") pod \"dashboard-metrics-scraper-5f989dc9cf-2xgql\" (UID: \"598b85f9-cf83-45bf-ac00-667cae766168\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql"
	Nov 08 09:17:07 old-k8s-version-339286 kubelet[725]: I1108 09:17:07.508851     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb245aae-48cc-4ddb-bd6a-375932d5804e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-tt95r\" (UID: \"cb245aae-48cc-4ddb-bd6a-375932d5804e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tt95r"
	Nov 08 09:17:11 old-k8s-version-339286 kubelet[725]: I1108 09:17:11.411154     725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tt95r" podStartSLOduration=1.5051657920000001 podCreationTimestamp="2025-11-08 09:17:07 +0000 UTC" firstStartedPulling="2025-11-08 09:17:07.729024647 +0000 UTC m=+15.492957837" lastFinishedPulling="2025-11-08 09:17:10.634926191 +0000 UTC m=+18.398859389" observedRunningTime="2025-11-08 09:17:11.410578171 +0000 UTC m=+19.174511384" watchObservedRunningTime="2025-11-08 09:17:11.411067344 +0000 UTC m=+19.175000555"
	Nov 08 09:17:14 old-k8s-version-339286 kubelet[725]: I1108 09:17:14.398097     725 scope.go:117] "RemoveContainer" containerID="61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d"
	Nov 08 09:17:15 old-k8s-version-339286 kubelet[725]: I1108 09:17:15.403659     725 scope.go:117] "RemoveContainer" containerID="61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d"
	Nov 08 09:17:15 old-k8s-version-339286 kubelet[725]: I1108 09:17:15.403727     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:15 old-k8s-version-339286 kubelet[725]: E1108 09:17:15.404713     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:16 old-k8s-version-339286 kubelet[725]: I1108 09:17:16.407252     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:16 old-k8s-version-339286 kubelet[725]: E1108 09:17:16.407574     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:17 old-k8s-version-339286 kubelet[725]: I1108 09:17:17.698592     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:17 old-k8s-version-339286 kubelet[725]: E1108 09:17:17.698993     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:26 old-k8s-version-339286 kubelet[725]: I1108 09:17:26.435015     725 scope.go:117] "RemoveContainer" containerID="40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: I1108 09:17:29.324352     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: I1108 09:17:29.446372     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: I1108 09:17:29.446595     725 scope.go:117] "RemoveContainer" containerID="a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: E1108 09:17:29.447069     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:37 old-k8s-version-339286 kubelet[725]: I1108 09:17:37.699349     725 scope.go:117] "RemoveContainer" containerID="a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	Nov 08 09:17:37 old-k8s-version-339286 kubelet[725]: E1108 09:17:37.699739     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:49 old-k8s-version-339286 kubelet[725]: I1108 09:17:49.323711     725 scope.go:117] "RemoveContainer" containerID="a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	Nov 08 09:17:49 old-k8s-version-339286 kubelet[725]: E1108 09:17:49.323981     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:17:49 old-k8s-version-339286 kubelet[725]: I1108 09:17:49.353205     725 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: kubelet.service: Consumed 1.643s CPU time.
	
	
	==> kubernetes-dashboard [03cf5adcdb2bd89563eab50522293021aed573d100ffd0206d694d31bcf28fbd] <==
	2025/11/08 09:17:10 Starting overwatch
	2025/11/08 09:17:10 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:10 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:10 Using secret token for csrf signing
	2025/11/08 09:17:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:10 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 09:17:10 Generating JWE encryption key
	2025/11/08 09:17:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:11 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:11 Creating in-cluster Sidecar client
	2025/11/08 09:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:11 Serving insecurely on HTTP port: 9090
	2025/11/08 09:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125] <==
	I1108 09:16:55.738118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:17:25.740806       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5] <==
	I1108 09:17:26.480705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:17:26.488466       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:17:26.488518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 09:17:43.885587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:17:43.885669       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c63ab52-f89e-4357-9f41-9364b79d256c", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-339286_0df6ce31-3356-4615-8cf6-d4e30cc5072b became leader
	I1108 09:17:43.885751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-339286_0df6ce31-3356-4615-8cf6-d4e30cc5072b!
	I1108 09:17:43.986633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-339286_0df6ce31-3356-4615-8cf6-d4e30cc5072b!
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-339286 -n old-k8s-version-339286
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-339286 -n old-k8s-version-339286: exit status 2 (333.599498ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-339286 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect old-k8s-version-339286
helpers_test.go:243: (dbg) docker inspect old-k8s-version-339286:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb",
	        "Created": "2025-11-08T09:15:31.664105217Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 310330,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:46.254380446Z",
	            "FinishedAt": "2025-11-08T09:16:45.343821845Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/hostname",
	        "HostsPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/hosts",
	        "LogPath": "/var/lib/docker/containers/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb/ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb-json.log",
	        "Name": "/old-k8s-version-339286",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-339286:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "old-k8s-version-339286",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "ce364047d86bf748b4da2ab33c006daed1d6113ac8ba742a4864c740f708c3bb",
	                "LowerDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdb9ffd60a30b70fe383c35896e67c991203bddb1d27c6e6321f5f6874973279/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-339286",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-339286/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-339286",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-339286",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-339286",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "033e7cde3fb4e483fe2e2664daeb4785bb6efc694044030eaec42029ee59f8e2",
	            "SandboxKey": "/var/run/docker/netns/033e7cde3fb4",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33109"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33110"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33113"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33111"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33112"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-339286": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "d2:4b:28:3a:a3:38",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "111659f5c16fa8de648fbd4b0737819906b512d8974c73538f9c6cac58753ac3",
	                    "EndpointID": "e243ebb2bc7bb3196b05953fdbec26d90759d630f6a8b2fca912172f35601c89",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "old-k8s-version-339286",
	                        "ce364047d86b"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286: exit status 2 (330.842169ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-339286 logs -n 25
E1108 09:17:54.234485    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-339286 logs -n 25: (1.100882777s)
helpers_test.go:260: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ ssh     │ -p bridge-732849 sudo systemctl status crio --all --full --no-pager                                                                                                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo systemctl cat crio --no-pager                                                                                                                                                                                           │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo find /etc/crio -type f -exec sh -c 'echo {}; cat {}' \;                                                                                                                                                                 │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ ssh     │ -p bridge-732849 sudo crio config                                                                                                                                                                                                             │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p bridge-732849                                                                                                                                                                                                                              │ bridge-732849                │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                                                                                               │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:23.014181  318772 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:23.014490  318772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:23.014501  318772 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:23.014506  318772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:23.014688  318772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:23.015160  318772 out.go:368] Setting JSON to false
	I1108 09:17:23.016473  318772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3594,"bootTime":1762589849,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:23.016562  318772 start.go:143] virtualization: kvm guest
	I1108 09:17:23.018650  318772 out.go:179] * [default-k8s-diff-port-677902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:23.020167  318772 notify.go:221] Checking for updates...
	I1108 09:17:23.020234  318772 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:23.021653  318772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:23.023193  318772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:23.024687  318772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:23.026129  318772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:23.027502  318772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:23.029342  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:23.029838  318772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:23.055123  318772 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:23.055259  318772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:23.110228  318772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:17:23.100330014 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:23.110439  318772 docker.go:319] overlay module found
	I1108 09:17:23.112516  318772 out.go:179] * Using the docker driver based on existing profile
	I1108 09:17:23.113842  318772 start.go:309] selected driver: docker
	I1108 09:17:23.113858  318772 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:23.113935  318772 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:23.114523  318772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:23.170233  318772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:17:23.160701234 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:23.170557  318772 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:17:23.170587  318772 cni.go:84] Creating CNI manager for ""
	I1108 09:17:23.170630  318772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:23.170681  318772 start.go:353] cluster config:
	{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:23.173037  318772 out.go:179] * Starting "default-k8s-diff-port-677902" primary control-plane node in "default-k8s-diff-port-677902" cluster
	I1108 09:17:23.174652  318772 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:23.176085  318772 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:23.177478  318772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:23.177520  318772 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:23.177527  318772 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:23.177553  318772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:23.177617  318772 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:23.177632  318772 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:23.177725  318772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:17:23.200331  318772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:23.200356  318772 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:23.200379  318772 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:23.200409  318772 start.go:360] acquireMachinesLock for default-k8s-diff-port-677902: {Name:mk526669374d724485de61415f0aa79950bc7fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:23.200478  318772 start.go:364] duration metric: took 44.108µs to acquireMachinesLock for "default-k8s-diff-port-677902"
	I1108 09:17:23.200502  318772 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:17:23.200508  318772 fix.go:54] fixHost starting: 
	I1108 09:17:23.200797  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:23.222078  318772 fix.go:112] recreateIfNeeded on default-k8s-diff-port-677902: state=Stopped err=<nil>
	W1108 09:17:23.222126  318772 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 09:17:23.215019  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:25.215267  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:21.423354  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:23.921916  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:25.922381  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:22.022599  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:24.467748  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:26.467970  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:23.223920  318772 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-677902" ...
	I1108 09:17:23.224026  318772 cli_runner.go:164] Run: docker start default-k8s-diff-port-677902
	I1108 09:17:23.517410  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:23.541523  318772 kic.go:430] container "default-k8s-diff-port-677902" state is running.
	I1108 09:17:23.542096  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:23.566822  318772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:17:23.567040  318772 machine.go:94] provisionDockerMachine start ...
	I1108 09:17:23.567111  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:23.587476  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:23.587789  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:23.587807  318772 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:17:23.588482  318772 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43940->127.0.0.1:33124: read: connection reset by peer
	I1108 09:17:26.720488  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:17:26.720521  318772 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-677902"
	I1108 09:17:26.720581  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:26.739702  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:26.739910  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:26.739923  318772 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-677902 && echo "default-k8s-diff-port-677902" | sudo tee /etc/hostname
	I1108 09:17:26.879756  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:17:26.879827  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:26.900874  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:26.901124  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:26.901145  318772 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-677902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-677902/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-677902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:17:27.030475  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:17:27.030504  318772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:17:27.030544  318772 ubuntu.go:190] setting up certificates
	I1108 09:17:27.030558  318772 provision.go:84] configureAuth start
	I1108 09:17:27.030617  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:27.049655  318772 provision.go:143] copyHostCerts
	I1108 09:17:27.049718  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:17:27.049734  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:17:27.049821  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:17:27.049958  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:17:27.049978  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:17:27.050022  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:17:27.050114  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:17:27.050123  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:17:27.050149  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:17:27.050225  318772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-677902 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-677902 localhost minikube]
	I1108 09:17:27.218430  318772 provision.go:177] copyRemoteCerts
	I1108 09:17:27.218485  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:17:27.218517  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.238620  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.334066  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:17:27.353472  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:17:27.371621  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:17:27.389736  318772 provision.go:87] duration metric: took 359.161729ms to configureAuth
	I1108 09:17:27.389766  318772 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:17:27.389969  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:27.390099  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.408638  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:27.408840  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:27.408855  318772 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:17:27.700508  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:17:27.700535  318772 machine.go:97] duration metric: took 4.133482649s to provisionDockerMachine
	I1108 09:17:27.700549  318772 start.go:293] postStartSetup for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:17:27.700562  318772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:17:27.700637  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:17:27.700708  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.722016  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.818358  318772 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:17:27.822257  318772 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:17:27.822295  318772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:17:27.822309  318772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:17:27.822368  318772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:17:27.822472  318772 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:17:27.822590  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:17:27.830681  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:27.849577  318772 start.go:296] duration metric: took 149.013814ms for postStartSetup
	I1108 09:17:27.849653  318772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:17:27.849714  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.869059  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.960711  318772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:17:27.965862  318772 fix.go:56] duration metric: took 4.765347999s for fixHost
	I1108 09:17:27.965889  318772 start.go:83] releasing machines lock for "default-k8s-diff-port-677902", held for 4.765396741s
	I1108 09:17:27.965955  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:27.984988  318772 ssh_runner.go:195] Run: cat /version.json
	I1108 09:17:27.985031  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.985093  318772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:17:27.985177  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:28.004610  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:28.004907  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:28.149001  318772 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:28.155580  318772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:17:28.192252  318772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:17:28.197116  318772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:17:28.197175  318772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:17:28.205203  318772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
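
The find invocation at 09:17:28.197 renames any bridge or podman CNI configs in /etc/cni/net.d to <name>.mk_disabled so they cannot shadow the CNI minikube intends to use (kindnet here); in this run nothing matched. A rough Go equivalent of that rename pass, assuming the same directory and name patterns as the find expression:

	// disablecni.go: sketch of the bridge-CNI disabling step from the log.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d"
		entries, err := os.ReadDir(dir)
		if err != nil {
			panic(err)
		}
		for _, e := range entries {
			name := e.Name()
			// mirror find's -not -name *.mk_disabled guard
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					panic(err)
				}
				fmt.Println("disabled", src)
			}
		}
	}
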
	I1108 09:17:28.205224  318772 start.go:496] detecting cgroup driver to use...
	I1108 09:17:28.205255  318772 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:17:28.205303  318772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:17:28.220826  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:17:28.234319  318772 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:17:28.234394  318772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:17:28.249292  318772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:17:28.262217  318772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:17:28.343998  318772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:17:28.425777  318772 docker.go:234] disabling docker service ...
	I1108 09:17:28.425843  318772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:17:28.440815  318772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:17:28.455138  318772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:17:28.537601  318772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:17:28.622788  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:17:28.635585  318772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:17:28.649621  318772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:17:28.649672  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.659171  318772 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:17:28.659244  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.668583  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.677393  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.686251  318772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:17:28.694982  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.704557  318772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.713519  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.723588  318772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:17:28.731786  318772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:17:28.739658  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:28.823880  318772 ssh_runner.go:195] Run: sudo systemctl restart crio
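
The sed pipeline above rewrites /etc/crio/crio.conf.d/02-crio.conf in place before the restart: it pins the pause image, switches the cgroup manager to systemd, forces conmon into the pod cgroup, and injects a default sysctl. Reconstructed from those sed expressions alone (the surrounding TOML table headers are whatever the stock drop-in already contains, so they are omitted here), the touched keys plausibly end up as:

	pause_image = "registry.k8s.io/pause:3.10.1"
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]
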
	I1108 09:17:28.925939  318772 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:17:28.926009  318772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:17:28.930260  318772 start.go:564] Will wait 60s for crictl version
	I1108 09:17:28.930332  318772 ssh_runner.go:195] Run: which crictl
	I1108 09:17:28.934146  318772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:17:28.959101  318772 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:17:28.959184  318772 ssh_runner.go:195] Run: crio --version
	I1108 09:17:28.987183  318772 ssh_runner.go:195] Run: crio --version
	I1108 09:17:29.017768  318772 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:17:29.019019  318772 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:17:29.036798  318772 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:17:29.041036  318772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
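
The bash one-liner at 09:17:29.041 is an idempotent /etc/hosts update: strip any existing host.minikube.internal line, append the current gateway mapping, and copy the result back into place. A small Go sketch of the same rewrite, with panic-on-error handling purely for brevity:

	// hostsentry.go: sketch of the /etc/hosts rewrite pattern from the log.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			// same filter as the log's grep -v $'\thost.minikube.internal$'
			if !strings.HasSuffix(line, "\thost.minikube.internal") {
				kept = append(kept, line)
			}
		}
		kept = append(kept, "192.168.76.1\thost.minikube.internal")
		if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
			panic(err)
		}
		fmt.Println("mapped host.minikube.internal -> 192.168.76.1")
	}
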
	I1108 09:17:29.051759  318772 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:17:29.051887  318772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:29.051933  318772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:17:29.084447  318772 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:17:29.084468  318772 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:17:29.084512  318772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:17:29.110976  318772 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:17:29.111002  318772 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:17:29.111018  318772 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1108 09:17:29.111172  318772 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-677902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:17:29.111249  318772 ssh_runner.go:195] Run: crio config
	I1108 09:17:29.155244  318772 cni.go:84] Creating CNI manager for ""
	I1108 09:17:29.155266  318772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:29.155307  318772 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:17:29.155338  318772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-677902 NodeName:default-k8s-diff-port-677902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:17:29.155495  318772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-677902"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:17:29.155551  318772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:17:29.163669  318772 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:17:29.163736  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:17:29.171252  318772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:17:29.184573  318772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:17:29.196971  318772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
	I1108 09:17:29.209695  318772 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:17:29.213735  318772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:17:29.224550  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:29.306727  318772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:17:29.333961  318772 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902 for IP: 192.168.76.2
	I1108 09:17:29.333990  318772 certs.go:195] generating shared ca certs ...
	I1108 09:17:29.334022  318772 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:29.334192  318772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:17:29.334258  318772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:17:29.334275  318772 certs.go:257] generating profile certs ...
	I1108 09:17:29.334443  318772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key
	I1108 09:17:29.334517  318772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273
	I1108 09:17:29.334567  318772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key
	I1108 09:17:29.334703  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:17:29.334750  318772 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:17:29.334763  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:17:29.334800  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:17:29.334836  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:17:29.334868  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:17:29.334923  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:29.335755  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:17:29.358546  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:17:29.382353  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:17:29.403720  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:17:29.426530  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:17:29.450442  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:17:29.471845  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:17:29.489173  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:17:29.506582  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:17:29.524071  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:17:29.543268  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:17:29.561916  318772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:17:29.574785  318772 ssh_runner.go:195] Run: openssl version
	I1108 09:17:29.581198  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:17:29.590123  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.593890  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.593942  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.629344  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:17:29.637798  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:17:29.646788  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.650810  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.650886  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.686144  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:17:29.694870  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:17:29.704343  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.708244  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.708301  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.747154  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
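
The symlink names created above (51391683.0, 3ec20f2e.0, b5213941.0) are not arbitrary: they follow OpenSSL's c_rehash convention, <subject-hash>.<n>, where the hash comes from openssl x509 -hash and n disambiguates collisions (0 when unique). A sketch that derives the link name the same way the log does, shelling out to the same openssl invocation (the input path is illustrative):

	// hashlink.go: sketch of deriving a /etc/ssl/certs symlink name via the
	// subject hash, as the openssl/ln commands in the log do.
	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout",
			"-in", "/usr/share/ca-certificates/minikubeCA.pem").Output()
		if err != nil {
			panic(err)
		}
		hash := strings.TrimSpace(string(out))
		// c_rehash convention: <subject-hash>.<n>, n=0 when the hash is unique.
		fmt.Printf("ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/%s.0\n", hash)
	}
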
	I1108 09:17:29.756245  318772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:17:29.760208  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:17:29.798830  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:17:29.835366  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:17:29.881735  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:17:29.926935  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:17:29.975380  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
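
Each openssl x509 -checkend 86400 run above exits non-zero if the certificate expires within the next 86400 seconds (24 hours), which is how the restart path decides whether the existing control-plane certs can be reused. The same check in Go, under the assumption of a single PEM certificate per file:

	// checkend.go: sketch of the 24-hour expiry check performed by
	// `openssl x509 -checkend 86400` in the log.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func main() {
		data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data) // sketch: assumes one well-formed PEM cert
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		if time.Now().Add(86400 * time.Second).After(cert.NotAfter) {
			fmt.Println("certificate will expire within 24h")
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least 24h")
	}
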
	I1108 09:17:30.025917  318772 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:30.026024  318772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:30.026120  318772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:30.057401  318772 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:17:30.057427  318772 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:17:30.057433  318772 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:17:30.057439  318772 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:17:30.057447  318772 cri.go:89] found id: ""
	I1108 09:17:30.057485  318772 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:17:30.069676  318772 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:30.069736  318772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:17:30.078414  318772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:17:30.078433  318772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:17:30.078477  318772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:17:30.086093  318772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:17:30.087564  318772 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-677902" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:30.088577  318772 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-5860/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-677902" cluster setting kubeconfig missing "default-k8s-diff-port-677902" context setting]
	I1108 09:17:30.089991  318772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.092252  318772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:17:30.100764  318772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:17:30.100804  318772 kubeadm.go:602] duration metric: took 22.36077ms to restartPrimaryControlPlane
	I1108 09:17:30.100814  318772 kubeadm.go:403] duration metric: took 74.907828ms to StartCluster
	I1108 09:17:30.100831  318772 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.100935  318772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:30.103426  318772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.103692  318772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:30.103761  318772 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:17:30.103862  318772 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.103881  318772 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-677902"
	W1108 09:17:30.103890  318772 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:17:30.103917  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.103945  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:30.103995  318772 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.104069  318772 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.104010  318772 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-677902"
	I1108 09:17:30.104098  318772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-677902"
	W1108 09:17:30.104104  318772 addons.go:248] addon dashboard should already be in state true
	I1108 09:17:30.104134  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.104426  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.104485  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.104734  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.126565  318772 out.go:179] * Verifying Kubernetes components...
	I1108 09:17:30.128137  318772 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-677902"
	W1108 09:17:30.128160  318772 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:17:30.128186  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.128648  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.129880  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:30.131252  318772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:17:30.131276  318772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:17:30.134171  318772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:17:30.134193  318772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
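
"scp memory -->" in these lines means the asset is streamed from an in-memory buffer over SSH rather than copied from a file on disk. A loose approximation using the ssh binary and sudo tee (minikube actually uses its internal SSH runner; the host, port, and target below are taken from this run's log, the key path is shortened, and the manifest body is a stand-in):

	// scpmemory.go: sketch of streaming an in-memory manifest to a remote path.
	package main

	import (
		"bytes"
		"os/exec"
	)

	func main() {
		manifest := []byte("apiVersion: v1\nkind: ServiceAccount\nmetadata:\n  name: storage-provisioner\n  namespace: kube-system\n")
		cmd := exec.Command("ssh", "-p", "33124",
			"-i", "/home/jenkins/.minikube/machines/default-k8s-diff-port-677902/id_rsa",
			"docker@127.0.0.1",
			"sudo tee /etc/kubernetes/addons/storage-provisioner.yaml >/dev/null")
		cmd.Stdin = bytes.NewReader(manifest)
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
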
	I1108 09:17:30.134249  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.134885  318772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1108 09:17:27.715578  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:30.215915  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:27.923031  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:29.925443  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:28.968151  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:31.470050  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:30.138683  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:17:30.138707  318772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:17:30.138768  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.155528  318772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:17:30.155552  318772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:17:30.155610  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.159215  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.161996  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.184265  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.283069  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:17:30.283103  318772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:17:30.283635  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:17:30.294542  318772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:17:30.295994  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:17:30.301109  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:17:30.301130  318772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:17:30.321171  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:17:30.321197  318772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:17:30.339306  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:17:30.339332  318772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:17:30.353887  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:17:30.353939  318772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:17:30.367921  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:17:30.367943  318772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:17:30.380743  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:17:30.380768  318772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:17:30.393662  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:17:30.393688  318772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:17:30.407461  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:17:30.407490  318772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:17:30.422749  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:17:32.507801  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.224127884s)
	I1108 09:17:32.507827  318772 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.213247429s)
	I1108 09:17:32.507867  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.211842649s)
	I1108 09:17:32.507875  318772 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-677902" to be "Ready" ...
	I1108 09:17:32.508003  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.085193451s)
	I1108 09:17:32.510165  318772 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-677902 addons enable metrics-server
	
	I1108 09:17:32.518886  318772 node_ready.go:49] node "default-k8s-diff-port-677902" is "Ready"
	I1108 09:17:32.518917  318772 node_ready.go:38] duration metric: took 11.026405ms for node "default-k8s-diff-port-677902" to be "Ready" ...
	I1108 09:17:32.518932  318772 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:17:32.518979  318772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:17:32.524408  318772 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:17:32.525554  318772 addons.go:515] duration metric: took 2.421802346s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:17:32.534116  318772 api_server.go:72] duration metric: took 2.430387553s to wait for apiserver process to appear ...
	I1108 09:17:32.534161  318772 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:17:32.534186  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:32.538878  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:32.538905  318772 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:32.714163  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:34.715192  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	I1108 09:17:35.715420  310009 pod_ready.go:94] pod "coredns-5dd5756b68-88pvx" is "Ready"
	I1108 09:17:35.715446  310009 pod_ready.go:86] duration metric: took 39.006203091s for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.718113  310009 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.721921  310009 pod_ready.go:94] pod "etcd-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.721942  310009 pod_ready.go:86] duration metric: took 3.80625ms for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.724454  310009 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.728081  310009 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.728098  310009 pod_ready.go:86] duration metric: took 3.62396ms for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.730525  310009 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.914488  310009 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.914516  310009 pod_ready.go:86] duration metric: took 183.97019ms for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:17:32.424544  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:34.922175  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:33.967021  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:35.967176  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:36.113947  310009 pod_ready.go:83] waiting for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:36.517018  310009 pod_ready.go:94] pod "kube-proxy-v4l6x" is "Ready"
	I1108 09:17:36.517049  310009 pod_ready.go:86] duration metric: took 403.07566ms for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:36.714683  310009 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:37.115339  310009 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-339286" is "Ready"
	I1108 09:17:37.115372  310009 pod_ready.go:86] duration metric: took 400.662562ms for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:37.115387  310009 pod_ready.go:40] duration metric: took 40.411019881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:37.176895  310009 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:17:37.178443  310009 out.go:203] 
	W1108 09:17:37.180072  310009 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:17:37.184774  310009 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:17:37.186452  310009 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-339286" cluster and "default" namespace by default
	I1108 09:17:33.034301  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:33.039725  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:33.039752  318772 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:17:33.534363  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:33.538638  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1108 09:17:33.539622  318772 api_server.go:141] control plane version: v1.34.1
	I1108 09:17:33.539644  318772 api_server.go:131] duration metric: took 1.005476188s to wait for apiserver health ...
	I1108 09:17:33.539652  318772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:17:33.542649  318772 system_pods.go:59] 8 kube-system pods found
	I1108 09:17:33.542678  318772 system_pods.go:61] "coredns-66bc5c9577-x49dj" [ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:17:33.542686  318772 system_pods.go:61] "etcd-default-k8s-diff-port-677902" [075b3604-f07a-4acb-8680-f000540900f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:17:33.542693  318772 system_pods.go:61] "kindnet-x89ph" [5f49623a-57d7-4854-8c1b-b4ca027bd24c] Running
	I1108 09:17:33.542705  318772 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-677902" [9787b81f-a90f-464b-8a61-d4ec701472f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:17:33.542713  318772 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-677902" [28070357-a633-4a19-a618-390b7a199a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:17:33.542723  318772 system_pods.go:61] "kube-proxy-5d9f2" [e880f62e-f713-4254-98e7-84f3941024f0] Running
	I1108 09:17:33.542730  318772 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-677902" [069d093e-35cb-4235-942b-cf15e67b9432] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:17:33.542734  318772 system_pods.go:61] "storage-provisioner" [00375859-41ff-4f26-b07f-73a5d30e46ee] Running
	I1108 09:17:33.542741  318772 system_pods.go:74] duration metric: took 3.082538ms to wait for pod list to return data ...
	I1108 09:17:33.542750  318772 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:17:33.545077  318772 default_sa.go:45] found service account: "default"
	I1108 09:17:33.545094  318772 default_sa.go:55] duration metric: took 2.339095ms for default service account to be created ...
	I1108 09:17:33.545103  318772 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:17:33.547820  318772 system_pods.go:86] 8 kube-system pods found
	I1108 09:17:33.547846  318772 system_pods.go:89] "coredns-66bc5c9577-x49dj" [ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:17:33.547854  318772 system_pods.go:89] "etcd-default-k8s-diff-port-677902" [075b3604-f07a-4acb-8680-f000540900f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:17:33.547860  318772 system_pods.go:89] "kindnet-x89ph" [5f49623a-57d7-4854-8c1b-b4ca027bd24c] Running
	I1108 09:17:33.547867  318772 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-677902" [9787b81f-a90f-464b-8a61-d4ec701472f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:17:33.547875  318772 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-677902" [28070357-a633-4a19-a618-390b7a199a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:17:33.547879  318772 system_pods.go:89] "kube-proxy-5d9f2" [e880f62e-f713-4254-98e7-84f3941024f0] Running
	I1108 09:17:33.547884  318772 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-677902" [069d093e-35cb-4235-942b-cf15e67b9432] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:17:33.547889  318772 system_pods.go:89] "storage-provisioner" [00375859-41ff-4f26-b07f-73a5d30e46ee] Running
	I1108 09:17:33.547898  318772 system_pods.go:126] duration metric: took 2.79107ms to wait for k8s-apps to be running ...
	I1108 09:17:33.547906  318772 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:17:33.547945  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:33.561240  318772 system_svc.go:56] duration metric: took 13.32927ms WaitForService to wait for kubelet
	I1108 09:17:33.561268  318772 kubeadm.go:587] duration metric: took 3.457542806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:17:33.561299  318772 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:17:33.563775  318772 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:17:33.563796  318772 node_conditions.go:123] node cpu capacity is 8
	I1108 09:17:33.563807  318772 node_conditions.go:105] duration metric: took 2.498943ms to run NodePressure ...
	I1108 09:17:33.563817  318772 start.go:242] waiting for startup goroutines ...
	I1108 09:17:33.563823  318772 start.go:247] waiting for cluster config update ...
	I1108 09:17:33.563833  318772 start.go:256] writing updated cluster config ...
	I1108 09:17:33.564106  318772 ssh_runner.go:195] Run: rm -f paused
	I1108 09:17:33.567850  318772 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:33.571308  318772 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x49dj" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:17:35.577410  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:37.578193  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:36.923004  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:38.923619  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:40.924149  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:37.973264  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:40.469582  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:39.578963  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:42.077248  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:17:42.467997  313008 pod_ready.go:94] pod "coredns-66bc5c9577-zdb97" is "Ready"
	I1108 09:17:42.468035  313008 pod_ready.go:86] duration metric: took 34.505824056s for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.470522  313008 pod_ready.go:83] waiting for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.474338  313008 pod_ready.go:94] pod "etcd-no-preload-220714" is "Ready"
	I1108 09:17:42.474362  313008 pod_ready.go:86] duration metric: took 3.818729ms for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.476372  313008 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.480064  313008 pod_ready.go:94] pod "kube-apiserver-no-preload-220714" is "Ready"
	I1108 09:17:42.480092  313008 pod_ready.go:86] duration metric: took 3.702017ms for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.481978  313008 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.667986  313008 pod_ready.go:94] pod "kube-controller-manager-no-preload-220714" is "Ready"
	I1108 09:17:42.668016  313008 pod_ready.go:86] duration metric: took 186.016263ms for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.866316  313008 pod_ready.go:83] waiting for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.266611  313008 pod_ready.go:94] pod "kube-proxy-66cm9" is "Ready"
	I1108 09:17:43.266646  313008 pod_ready.go:86] duration metric: took 400.304671ms for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.465603  313008 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.866064  313008 pod_ready.go:94] pod "kube-scheduler-no-preload-220714" is "Ready"
	I1108 09:17:43.866090  313008 pod_ready.go:86] duration metric: took 400.463165ms for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.866101  313008 pod_ready.go:40] duration metric: took 35.96660519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:43.912507  313008 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:17:43.914651  313008 out.go:179] * Done! kubectl is now configured to use "no-preload-220714" cluster and "default" namespace by default
	I1108 09:17:43.422936  312299 pod_ready.go:94] pod "coredns-66bc5c9577-cbw4j" is "Ready"
	I1108 09:17:43.422965  312299 pod_ready.go:86] duration metric: took 35.505880955s for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.425909  312299 pod_ready.go:83] waiting for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.431928  312299 pod_ready.go:94] pod "etcd-embed-certs-271910" is "Ready"
	I1108 09:17:43.431954  312299 pod_ready.go:86] duration metric: took 6.020724ms for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.434331  312299 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.438424  312299 pod_ready.go:94] pod "kube-apiserver-embed-certs-271910" is "Ready"
	I1108 09:17:43.438442  312299 pod_ready.go:86] duration metric: took 4.093369ms for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.440478  312299 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.620323  312299 pod_ready.go:94] pod "kube-controller-manager-embed-certs-271910" is "Ready"
	I1108 09:17:43.620365  312299 pod_ready.go:86] duration metric: took 179.862516ms for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.820429  312299 pod_ready.go:83] waiting for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.221050  312299 pod_ready.go:94] pod "kube-proxy-lwbl6" is "Ready"
	I1108 09:17:44.221084  312299 pod_ready.go:86] duration metric: took 400.626058ms for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.421474  312299 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.820796  312299 pod_ready.go:94] pod "kube-scheduler-embed-certs-271910" is "Ready"
	I1108 09:17:44.820825  312299 pod_ready.go:86] duration metric: took 399.325955ms for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.820836  312299 pod_ready.go:40] duration metric: took 36.908910218s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:44.864186  312299 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:17:44.865991  312299 out.go:179] * Done! kubectl is now configured to use "embed-certs-271910" cluster and "default" namespace by default
	W1108 09:17:44.577222  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:46.577391  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:48.578983  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:51.076640  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
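	
	The two wait loops above — polling https://192.168.76.2:8444/healthz until it returns 200, then polling each kube-system pod's Ready condition — can be reproduced by hand. A minimal sketch, assuming kubectl's current context points at the default-k8s-diff-port-677902 cluster (the pod name is taken from the log):
	
	  $ kubectl get --raw='/healthz?verbose'
	  $ kubectl -n kube-system get pod coredns-66bc5c9577-x49dj \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	
	The first command returns the same [+]/[-] checklist seen in the 500 response above; the second prints "True" once the pod is Ready.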
	
	
	==> CRI-O <==
	Nov 08 09:17:14 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:14.502761035Z" level=info msg="Started container" PID=1732 containerID=ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper id=761f25ac-7c5a-4746-a01e-4c1889e8c772 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ba348a89182f89973383626ae93b4e0cf9381ae86f0d52fa3d51909a1214f08
	Nov 08 09:17:15 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:15.405261034Z" level=info msg="Removing container: 61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d" id=f3a23684-7bf9-434d-a4a3-0a86363f05ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:15 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:15.420693779Z" level=info msg="Removed container 61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=f3a23684-7bf9-434d-a4a3-0a86363f05ec name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.435422784Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=95fefa2f-cc9b-467d-b09e-0861aed4e816 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.436336946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=f86c3b8e-5751-46ea-b614-a8d1aed4adb3 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.437346459Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=107d6ede-4b94-494a-a01e-69b6e25ac10c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.437478425Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.441698255Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.441887779Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/fc474c731d8335c9f719db5ef3d64276011cc574476f6d36112f025ee2f6dd15/merged/etc/passwd: no such file or directory"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.441923182Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/fc474c731d8335c9f719db5ef3d64276011cc574476f6d36112f025ee2f6dd15/merged/etc/group: no such file or directory"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.442203181Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.466640405Z" level=info msg="Created container a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5: kube-system/storage-provisioner/storage-provisioner" id=107d6ede-4b94-494a-a01e-69b6e25ac10c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.467342603Z" level=info msg="Starting container: a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5" id=b429592d-2ba9-4dc8-809b-0e48c9292429 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:26 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:26.469145289Z" level=info msg="Started container" PID=1748 containerID=a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5 description=kube-system/storage-provisioner/storage-provisioner id=b429592d-2ba9-4dc8-809b-0e48c9292429 name=/runtime.v1.RuntimeService/StartContainer sandboxID=34ef3c2686eeea72180534dfe3bda9f3bab89357ac6970b5dfdc5291f863192b
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.325047506Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=8fb525d7-75be-4714-a305-728881aa2274 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.325991923Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ab1a83ca-92aa-4b53-b636-9115c37e749c name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.32712879Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=8d3249ca-2900-4a5f-81db-16d33ba883d1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.327347692Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.336913036Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.337673768Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.358380025Z" level=info msg="Created container a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=8d3249ca-2900-4a5f-81db-16d33ba883d1 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.359042345Z" level=info msg="Starting container: a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857" id=18f40cb0-f3c5-498c-a7aa-fedc73462898 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.361137744Z" level=info msg="Started container" PID=1764 containerID=a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857 description=kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper id=18f40cb0-f3c5-498c-a7aa-fedc73462898 name=/runtime.v1.RuntimeService/StartContainer sandboxID=4ba348a89182f89973383626ae93b4e0cf9381ae86f0d52fa3d51909a1214f08
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.447672732Z" level=info msg="Removing container: ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391" id=54e9d57a-cbd0-4891-b759-1e1c3d181653 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:29 old-k8s-version-339286 crio[564]: time="2025-11-08T09:17:29.457593175Z" level=info msg="Removed container ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391: kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql/dashboard-metrics-scraper" id=54e9d57a-cbd0-4891-b759-1e1c3d181653 name=/runtime.v1.RuntimeService/RemoveContainer
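	
	These CRI-O entries show the dashboard-metrics-scraper container being created, started, exiting, and the previous attempt being removed — the runtime-side view of a crash loop. A quick way to inspect it from the host, assuming the container from the status table below is still present (crictl ships in the minikube node image):
	
	  $ minikube -p old-k8s-version-339286 ssh -- sudo crictl ps -a --name dashboard-metrics-scraper
	  $ minikube -p old-k8s-version-339286 ssh -- sudo crictl logs a316eac5d63e2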
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                              NAMESPACE
	a316eac5d63e2       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           24 seconds ago       Exited              dashboard-metrics-scraper   2                   4ba348a89182f       dashboard-metrics-scraper-5f989dc9cf-2xgql       kubernetes-dashboard
	a6b3caa95b08e       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           27 seconds ago       Running             storage-provisioner         1                   34ef3c2686eee       storage-provisioner                              kube-system
	03cf5adcdb2bd       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   43 seconds ago       Running             kubernetes-dashboard        0                   1affe07e514f3       kubernetes-dashboard-8694d4445c-tt95r            kubernetes-dashboard
	b6cde499f752e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                           58 seconds ago       Running             coredns                     0                   3af4ae025e1b7       coredns-5dd5756b68-88pvx                         kube-system
	8166883d857eb       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           58 seconds ago       Running             busybox                     1                   da61c4391f92f       busybox                                          default
	7328412edf383       ea1030da44aa18666a7bf15fddd2a38c3143c3277159cb8bdd95f45c8ce62d7a                                           58 seconds ago       Running             kube-proxy                  0                   5d5cf6630cb55       kube-proxy-v4l6x                                 kube-system
	40c5750e71e71       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           58 seconds ago       Exited              storage-provisioner         0                   34ef3c2686eee       storage-provisioner                              kube-system
	7fd46eea76685       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           58 seconds ago       Running             kindnet-cni                 0                   069345d16e97a       kindnet-6d922                                    kube-system
	05f0737bca264       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                           About a minute ago   Running             etcd                        0                   9a74d77a2add9       etcd-old-k8s-version-339286                      kube-system
	b110ac2f6aa3a       f6f496300a2ae7a6727ccf3080d66d2fd22b6cfc271df5351c976c23a28bb157                                           About a minute ago   Running             kube-scheduler              0                   11b8347c7b0b9       kube-scheduler-old-k8s-version-339286            kube-system
	98d55dc91e4cc       bb5e0dde9054c02d6badee88547be7e7bb7b7b818d277c8a61b4b29484bbff95                                           About a minute ago   Running             kube-apiserver              0                   2309dbe9620bb       kube-apiserver-old-k8s-version-339286            kube-system
	5f7fc9875b5fc       4be79c38a4bab6e1252a35697500e8a0d9c5c7c771d9fcc1935c9a7f6cdf4c62                                           About a minute ago   Running             kube-controller-manager     0                   839b6a6c9bac1       kube-controller-manager-old-k8s-version-339286   kube-system
	
	
	==> coredns [b6cde499f752ef145be3de31b57fb2d4179e3c94f0b0c1122da9b0663243c16c] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 25cf5af2951e282c4b0e961a02fb5d3e57c974501832fee92eec17b5135b9ec9d9e87d2ac94e6d117a5ed3dd54e8800aa7b4479706eb54497145ccdb80397d1b
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:35905 - 38774 "HINFO IN 5868626517375141879.3157310105645428470. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.416973272s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: i/o timeout
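	
	The closing warning — an i/o timeout dialing 10.96.0.1:443 — means this CoreDNS pod could not reach the kubernetes Service VIP at that moment. A hedged connectivity probe from a throwaway pod (assumes the context points at old-k8s-version-339286 and that the busybox image's wget was built with HTTPS support; /version is readable by unauthenticated clients under default RBAC):
	
	  $ kubectl run api-probe --rm -it --restart=Never --image=busybox:1.36 -- \
	      wget -qO- --no-check-certificate https://10.96.0.1:443/version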
	
	
	==> describe nodes <==
	Name:               old-k8s-version-339286
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-339286
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=old-k8s-version-339286
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_15_51_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:15:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  old-k8s-version-339286
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:17:46 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:15:43 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:25 +0000   Sat, 08 Nov 2025 09:16:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    old-k8s-version-339286
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                67b4f6ec-c7a7-47b7-a68b-0baf0383287f
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.28.0
	  Kube-Proxy Version:         v1.28.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                              ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         97s
	  kube-system                 coredns-5dd5756b68-88pvx                          100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     113s
	  kube-system                 etcd-old-k8s-version-339286                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         2m6s
	  kube-system                 kindnet-6d922                                     100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      114s
	  kube-system                 kube-apiserver-old-k8s-version-339286             250m (3%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-controller-manager-old-k8s-version-339286    200m (2%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 kube-proxy-v4l6x                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-scheduler-old-k8s-version-339286             100m (1%)     0 (0%)      0 (0%)           0 (0%)         2m6s
	  kube-system                 storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         113s
	  kubernetes-dashboard        dashboard-metrics-scraper-5f989dc9cf-2xgql        0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	  kubernetes-dashboard        kubernetes-dashboard-8694d4445c-tt95r             0 (0%)        0 (0%)      0 (0%)           0 (0%)         47s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 112s                   kube-proxy       
	  Normal  Starting                 58s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x8 over 2m12s)  kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    2m6s                   kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  2m6s                   kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m6s                   kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m6s                   kubelet          Starting kubelet.
	  Normal  RegisteredNode           114s                   node-controller  Node old-k8s-version-339286 event: Registered Node old-k8s-version-339286 in Controller
	  Normal  NodeReady                101s                   kubelet          Node old-k8s-version-339286 status is now: NodeReady
	  Normal  Starting                 62s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  62s (x9 over 62s)      kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    62s (x8 over 62s)      kubelet          Node old-k8s-version-339286 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     62s (x7 over 62s)      kubelet          Node old-k8s-version-339286 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           47s                    node-controller  Node old-k8s-version-339286 event: Registered Node old-k8s-version-339286 in Controller
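	
	Two consistency checks on this node report: the 850m CPU request total is the sum of the per-pod requests listed above (100m coredns + 100m etcd + 100m kindnet + 250m kube-apiserver + 200m kube-controller-manager + 100m kube-scheduler), and the three "Starting kubelet." events confirm the kubelet restarted twice after the initial boot. The same event stream can be pulled directly:
	
	  $ kubectl get events -A --field-selector involvedObject.name=old-k8s-version-339286 \
	      --sort-by=.lastTimestamp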
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
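	
	The "martian source" lines are the kernel logging packets that arrived on eth0 with a source address it does not consider valid for that interface — routine noise on Docker bridge networks as pods (10.244.0.x) come and go. To confirm the logging knob rather than chase the symptom:
	
	  $ minikube -p old-k8s-version-339286 ssh -- sysctl net.ipv4.conf.all.log_martians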
	
	
	==> etcd [05f0737bca264f7f63b51b5b41958d7c656b10eb4e6383035b2181dc9b6cf531] <==
	{"level":"info","ts":"2025-11-08T09:16:52.879062Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 switched to configuration voters=(17451554867067011209)"}
	{"level":"info","ts":"2025-11-08T09:16:52.879195Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","added-peer-id":"f23060b075c4c089","added-peer-peer-urls":["https://192.168.103.2:2380"]}
	{"level":"info","ts":"2025-11-08T09:16:52.879362Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:16:52.879377Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"3336683c081d149d","local-member-id":"f23060b075c4c089","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:16:52.879404Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-11-08T09:16:52.879473Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-11-08T09:16:52.883491Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-11-08T09:16:52.883724Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"f23060b075c4c089","initial-advertise-peer-urls":["https://192.168.103.2:2380"],"listen-peer-urls":["https://192.168.103.2:2380"],"advertise-client-urls":["https://192.168.103.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.103.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-11-08T09:16:52.883752Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-11-08T09:16:52.88389Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-08T09:16:52.883935Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.103.2:2380"}
	{"level":"info","ts":"2025-11-08T09:16:54.069737Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 is starting a new election at term 2"}
	{"level":"info","ts":"2025-11-08T09:16:54.069783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-11-08T09:16:54.069843Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgPreVoteResp from f23060b075c4c089 at term 2"}
	{"level":"info","ts":"2025-11-08T09:16:54.069862Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became candidate at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.069869Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 received MsgVoteResp from f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.069879Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f23060b075c4c089 became leader at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.069889Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f23060b075c4c089 elected leader f23060b075c4c089 at term 3"}
	{"level":"info","ts":"2025-11-08T09:16:54.070926Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f23060b075c4c089","local-member-attributes":"{Name:old-k8s-version-339286 ClientURLs:[https://192.168.103.2:2379]}","request-path":"/0/members/f23060b075c4c089/attributes","cluster-id":"3336683c081d149d","publish-timeout":"7s"}
	{"level":"info","ts":"2025-11-08T09:16:54.070933Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:16:54.070958Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-11-08T09:16:54.071186Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-11-08T09:16:54.071217Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-11-08T09:16:54.072225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.103.2:2379"}
	{"level":"info","ts":"2025-11-08T09:16:54.072225Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 09:17:54 up  1:00,  0 user,  load average: 4.26, 3.97, 2.60
	Linux old-k8s-version-339286 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [7fd46eea766854907abc014be16bd2d636925caf5dc40c846854d2596d5eb35b] <==
	I1108 09:16:55.917297       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:16:55.917588       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1108 09:16:55.917762       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:16:55.917780       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:16:55.917799       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:16:56Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:16:56.213156       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:16:56.213182       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:16:56.213195       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:16:56.214129       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:16:56.787356       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:16:56.787401       1 metrics.go:72] Registering metrics
	I1108 09:16:56.787472       1 controller.go:711] "Syncing nftables rules"
	I1108 09:17:06.213455       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:06.213535       1 main.go:301] handling current node
	I1108 09:17:16.213394       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:16.213482       1 main.go:301] handling current node
	I1108 09:17:26.213866       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:26.213912       1 main.go:301] handling current node
	I1108 09:17:36.213345       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:36.213403       1 main.go:301] handling current node
	I1108 09:17:46.213313       1 main.go:297] Handling node with IPs: map[192.168.103.2:{}]
	I1108 09:17:46.213344       1 main.go:301] handling current node
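	
	kindnet's only error here is the missing NRI socket, which the log shows it treats as non-fatal (the plugin exits and the 10-second node-sync loop keeps running). If NRI integration is actually wanted, the first check is whether the runtime exposes the socket at all:
	
	  $ minikube -p old-k8s-version-339286 ssh -- ls -l /var/run/nri/nri.sock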
	
	
	==> kube-apiserver [98d55dc91e4cc5e33d70693f9526f6aa60b212a464cedaf28800663629becec9] <==
	I1108 09:16:55.007613       1 crd_finalizer.go:266] Starting CRDFinalizer
	I1108 09:16:55.043749       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:16:55.053843       1 shared_informer.go:318] Caches are synced for node_authorizer
	I1108 09:16:55.109175       1 shared_informer.go:318] Caches are synced for configmaps
	I1108 09:16:55.109317       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I1108 09:16:55.109342       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I1108 09:16:55.109309       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I1108 09:16:55.109368       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I1108 09:16:55.109258       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I1108 09:16:55.109379       1 aggregator.go:166] initial CRD sync complete...
	I1108 09:16:55.109390       1 autoregister_controller.go:141] Starting autoregister controller
	I1108 09:16:55.109397       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:16:55.109405       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:16:55.109672       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:16:55.991242       1 controller.go:624] quota admission added evaluator for: namespaces
	I1108 09:16:56.007135       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:16:56.029338       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I1108 09:16:56.054212       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:16:56.064108       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:16:56.074490       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I1108 09:16:56.126986       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.255.85"}
	I1108 09:16:56.143939       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.107.2.160"}
	I1108 09:17:07.319055       1 controller.go:624] quota admission added evaluator for: endpoints
	I1108 09:17:07.367223       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I1108 09:17:07.529973       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
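	
	The two alloc.go lines record the ClusterIPs handed to the dashboard Services (10.96.255.85 and 10.107.2.160); they should match what the API reports afterwards:
	
	  $ kubectl -n kubernetes-dashboard get svc -o wide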
	
	
	==> kube-controller-manager [5f7fc9875b5fc7556f1ac83d8021344a544c674dc9f5c94000db6e9658a05653] <==
	I1108 09:17:07.385885       1 event.go:307] "Event occurred" object="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kubernetes-dashboard-8694d4445c-tt95r"
	I1108 09:17:07.392787       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="21.222099ms"
	I1108 09:17:07.396776       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="20.196831ms"
	I1108 09:17:07.399767       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="6.927264ms"
	I1108 09:17:07.399864       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="41.931µs"
	I1108 09:17:07.402270       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="5.440447ms"
	I1108 09:17:07.402406       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="58.96µs"
	I1108 09:17:07.409179       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="57.703µs"
	I1108 09:17:07.417108       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="49.211µs"
	I1108 09:17:07.470396       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:17:07.517917       1 shared_informer.go:318] Caches are synced for endpoint_slice
	I1108 09:17:07.533639       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I1108 09:17:07.549390       1 shared_informer.go:318] Caches are synced for resource quota
	I1108 09:17:07.886799       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:17:07.900391       1 shared_informer.go:318] Caches are synced for garbage collector
	I1108 09:17:07.900457       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I1108 09:17:11.423994       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="15.561707ms"
	I1108 09:17:11.424963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-8694d4445c" duration="863.775µs"
	I1108 09:17:14.414119       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="1.784916ms"
	I1108 09:17:15.422883       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="55.156µs"
	I1108 09:17:16.471936       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="88.136µs"
	I1108 09:17:29.458911       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="79.732µs"
	I1108 09:17:35.361571       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="6.919924ms"
	I1108 09:17:35.361693       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="74.518µs"
	I1108 09:17:37.711240       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf" duration="78.937µs"
	
	
	==> kube-proxy [7328412edf383ebc9fbee37e5106e103265cecd11ef6e4b37aad9fc4ef5afa30] <==
	I1108 09:16:55.790498       1 server_others.go:69] "Using iptables proxy"
	I1108 09:16:55.801546       1 node.go:141] Successfully retrieved node IP: 192.168.103.2
	I1108 09:16:55.819900       1 server.go:632] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:16:55.822220       1 server_others.go:152] "Using iptables Proxier"
	I1108 09:16:55.822251       1 server_others.go:421] "Detect-local-mode set to ClusterCIDR, but no cluster CIDR for family" ipFamily="IPv6"
	I1108 09:16:55.822259       1 server_others.go:438] "Defaulting to no-op detect-local"
	I1108 09:16:55.822324       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I1108 09:16:55.822571       1 server.go:846] "Version info" version="v1.28.0"
	I1108 09:16:55.822590       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:55.823231       1 config.go:188] "Starting service config controller"
	I1108 09:16:55.823269       1 shared_informer.go:311] Waiting for caches to sync for service config
	I1108 09:16:55.823304       1 config.go:97] "Starting endpoint slice config controller"
	I1108 09:16:55.823317       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I1108 09:16:55.823395       1 config.go:315] "Starting node config controller"
	I1108 09:16:55.823412       1 shared_informer.go:311] Waiting for caches to sync for node config
	I1108 09:16:55.924444       1 shared_informer.go:318] Caches are synced for node config
	I1108 09:16:55.924478       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I1108 09:16:55.924468       1 shared_informer.go:318] Caches are synced for service config
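	
	kube-proxy came up in iptables mode, IPv4 only (the IPv6 detect-local warning is expected on a single-stack cluster), so each Service above should be visible as a KUBE-SERVICES rule on the node:
	
	  $ minikube -p old-k8s-version-339286 ssh -- sudo iptables -t nat -L KUBE-SERVICES -n | head -n 20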
	
	
	==> kube-scheduler [b110ac2f6aa3af2724fee2a70005a78d6d94180425eb8c585f94cc26ee06c01d] <==
	I1108 09:16:53.226438       1 serving.go:348] Generated self-signed cert in-memory
	I1108 09:16:55.063991       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.0"
	I1108 09:16:55.064019       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:16:55.067610       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I1108 09:16:55.067630       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:16:55.067658       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 09:16:55.067633       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I1108 09:16:55.067627       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:16:55.067782       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1108 09:16:55.068496       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I1108 09:16:55.068741       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I1108 09:16:55.167979       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	I1108 09:16:55.168012       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I1108 09:16:55.168025       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Nov 08 09:17:07 old-k8s-version-339286 kubelet[725]: I1108 09:17:07.508795     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-585gq\" (UniqueName: \"kubernetes.io/projected/598b85f9-cf83-45bf-ac00-667cae766168-kube-api-access-585gq\") pod \"dashboard-metrics-scraper-5f989dc9cf-2xgql\" (UID: \"598b85f9-cf83-45bf-ac00-667cae766168\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql"
	Nov 08 09:17:07 old-k8s-version-339286 kubelet[725]: I1108 09:17:07.508851     725 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/cb245aae-48cc-4ddb-bd6a-375932d5804e-tmp-volume\") pod \"kubernetes-dashboard-8694d4445c-tt95r\" (UID: \"cb245aae-48cc-4ddb-bd6a-375932d5804e\") " pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tt95r"
	Nov 08 09:17:11 old-k8s-version-339286 kubelet[725]: I1108 09:17:11.411154     725 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-8694d4445c-tt95r" podStartSLOduration=1.5051657920000001 podCreationTimestamp="2025-11-08 09:17:07 +0000 UTC" firstStartedPulling="2025-11-08 09:17:07.729024647 +0000 UTC m=+15.492957837" lastFinishedPulling="2025-11-08 09:17:10.634926191 +0000 UTC m=+18.398859389" observedRunningTime="2025-11-08 09:17:11.410578171 +0000 UTC m=+19.174511384" watchObservedRunningTime="2025-11-08 09:17:11.411067344 +0000 UTC m=+19.175000555"
	Nov 08 09:17:14 old-k8s-version-339286 kubelet[725]: I1108 09:17:14.398097     725 scope.go:117] "RemoveContainer" containerID="61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d"
	Nov 08 09:17:15 old-k8s-version-339286 kubelet[725]: I1108 09:17:15.403659     725 scope.go:117] "RemoveContainer" containerID="61cd7271b63baee9d5e3e8e07c0f7eeb1cb6739784069379b5826c04ab49914d"
	Nov 08 09:17:15 old-k8s-version-339286 kubelet[725]: I1108 09:17:15.403727     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:15 old-k8s-version-339286 kubelet[725]: E1108 09:17:15.404713     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:16 old-k8s-version-339286 kubelet[725]: I1108 09:17:16.407252     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:16 old-k8s-version-339286 kubelet[725]: E1108 09:17:16.407574     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:17 old-k8s-version-339286 kubelet[725]: I1108 09:17:17.698592     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:17 old-k8s-version-339286 kubelet[725]: E1108 09:17:17.698993     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:26 old-k8s-version-339286 kubelet[725]: I1108 09:17:26.435015     725 scope.go:117] "RemoveContainer" containerID="40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: I1108 09:17:29.324352     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: I1108 09:17:29.446372     725 scope.go:117] "RemoveContainer" containerID="ef547c33ad62317bd67cc4a0ae2661e23ec8022758ccdbf29e2052a171b56391"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: I1108 09:17:29.446595     725 scope.go:117] "RemoveContainer" containerID="a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	Nov 08 09:17:29 old-k8s-version-339286 kubelet[725]: E1108 09:17:29.447069     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:37 old-k8s-version-339286 kubelet[725]: I1108 09:17:37.699349     725 scope.go:117] "RemoveContainer" containerID="a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	Nov 08 09:17:37 old-k8s-version-339286 kubelet[725]: E1108 09:17:37.699739     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:49 old-k8s-version-339286 kubelet[725]: I1108 09:17:49.323711     725 scope.go:117] "RemoveContainer" containerID="a316eac5d63e2b1f90a088f4ee5d89d5100713b9e774849095a9227ced5b4857"
	Nov 08 09:17:49 old-k8s-version-339286 kubelet[725]: E1108 09:17:49.323981     725 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5f989dc9cf-2xgql_kubernetes-dashboard(598b85f9-cf83-45bf-ac00-667cae766168)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-5f989dc9cf-2xgql" podUID="598b85f9-cf83-45bf-ac00-667cae766168"
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:17:49 old-k8s-version-339286 kubelet[725]: I1108 09:17:49.353205     725 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:17:49 old-k8s-version-339286 systemd[1]: kubelet.service: Consumed 1.643s CPU time.
	
	
	==> kubernetes-dashboard [03cf5adcdb2bd89563eab50522293021aed573d100ffd0206d694d31bcf28fbd] <==
	2025/11/08 09:17:10 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:10 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:10 Using secret token for csrf signing
	2025/11/08 09:17:10 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:10 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:10 Successful initial request to the apiserver, version: v1.28.0
	2025/11/08 09:17:10 Generating JWE encryption key
	2025/11/08 09:17:10 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:10 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:11 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:11 Creating in-cluster Sidecar client
	2025/11/08 09:17:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:11 Serving insecurely on HTTP port: 9090
	2025/11/08 09:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:10 Starting overwatch
	
	
	==> storage-provisioner [40c5750e71e717fee4e2f434005577d094451c2b7d1a03801d27740a554e3125] <==
	I1108 09:16:55.738118       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:17:25.740806       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [a6b3caa95b08e9cc59325cf70666eca49ecb97b47f1a0ac4d7ba2fbc4e45b7f5] <==
	I1108 09:17:26.480705       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:17:26.488466       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:17:26.488518       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1108 09:17:43.885587       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:17:43.885669       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6c63ab52-f89e-4357-9f41-9364b79d256c", APIVersion:"v1", ResourceVersion:"656", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-339286_0df6ce31-3356-4615-8cf6-d4e30cc5072b became leader
	I1108 09:17:43.885751       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-339286_0df6ce31-3356-4615-8cf6-d4e30cc5072b!
	I1108 09:17:43.986633       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-339286_0df6ce31-3356-4615-8cf6-d4e30cc5072b!
	

-- /stdout --
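The kubelet entries above show dashboard-metrics-scraper crash-looping, with the restart back-off doubling from 10s to 20s, while the first storage-provisioner instance died on an i/o timeout reaching the apiserver at 10.96.0.1:443 and its replacement then acquired the leader lease. A hedged diagnostic sketch for reproducing that view, assuming kubectl access to the context named in this report:

	kubectl --context old-k8s-version-339286 -n kubernetes-dashboard get pods
	# pod name taken from the kubelet log above; --previous shows the run that crashed
	kubectl --context old-k8s-version-339286 -n kubernetes-dashboard logs dashboard-metrics-scraper-5f989dc9cf-2xgql --previous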
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-339286 -n old-k8s-version-339286
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-339286 -n old-k8s-version-339286: exit status 2 (322.267353ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context old-k8s-version-339286 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (6.27s)
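Note the status probe above: --format={{.APIServer}} prints Running, yet the command exits 2; the non-zero exit reflects overall cluster state rather than the single templated field (the harness itself notes "may be ok"), and the systemd entries in the log dump show kubelet being stopped at 09:17:49 during the pause attempt. A hedged sketch for widening the same probe ({{.Host}} and {{.APIServer}} appear verbatim in this report; {{.Kubelet}} is assumed to be a field of the same status struct):

	out/minikube-linux-amd64 status -p old-k8s-version-339286 --format='{{.Host}} {{.Kubelet}} {{.APIServer}}'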

TestStartStop/group/no-preload/serial/Pause (5.91s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-220714 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p no-preload-220714 --alsologtostderr -v=1: exit status 80 (1.690952261s)

-- stdout --
	* Pausing node no-preload-220714 ... 
	
	

-- /stdout --
** stderr ** 
	I1108 09:17:55.634777  323854 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:55.635031  323854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:55.635040  323854 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:55.635044  323854 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:55.635312  323854 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:55.635548  323854 out.go:368] Setting JSON to false
	I1108 09:17:55.635594  323854 mustload.go:66] Loading cluster: no-preload-220714
	I1108 09:17:55.635944  323854 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:55.636357  323854 cli_runner.go:164] Run: docker container inspect no-preload-220714 --format={{.State.Status}}
	I1108 09:17:55.657980  323854 host.go:66] Checking if "no-preload-220714" exists ...
	I1108 09:17:55.658305  323854 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:55.739984  323854 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-08 09:17:55.723977035 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:55.740857  323854 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-
cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qe
mu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:no-preload-220714 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true)
wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:17:55.743506  323854 out.go:179] * Pausing node no-preload-220714 ... 
	I1108 09:17:55.745048  323854 host.go:66] Checking if "no-preload-220714" exists ...
	I1108 09:17:55.745694  323854 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:55.745735  323854 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" no-preload-220714
	I1108 09:17:55.767225  323854 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33119 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/no-preload-220714/id_rsa Username:docker}
	I1108 09:17:55.862204  323854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:55.875117  323854 pause.go:52] kubelet running: true
	I1108 09:17:55.875218  323854 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:56.033115  323854 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:56.033231  323854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:56.099757  323854 cri.go:89] found id: "f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390"
	I1108 09:17:56.099793  323854 cri.go:89] found id: "72ca87c29be3674615ea2310d4ce35a28bf6902372ccf3cbb64b6fe5342d5828"
	I1108 09:17:56.099800  323854 cri.go:89] found id: "d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c"
	I1108 09:17:56.099803  323854 cri.go:89] found id: "4e7e706219a205afe2dd065a3554c1ca7e78cbfdc9f409f62564c8b6003a136a"
	I1108 09:17:56.099805  323854 cri.go:89] found id: "aaae2304edaccb39caba0aedbe6bbbc27ae6f9630f3040981499827f3ad62365"
	I1108 09:17:56.099808  323854 cri.go:89] found id: "fa188b4b7f4f29c847e9cf3900671c80c7e9ffcd91d763bb30db0af0b6fd9ba0"
	I1108 09:17:56.099811  323854 cri.go:89] found id: "3d9b01a52911c8c96e384a546564442fe2555748e1f47f6bb05707a71fd1044d"
	I1108 09:17:56.099816  323854 cri.go:89] found id: "0148d2ce10edecc0211c834c9a26268deafc67cddc66903beb3c4616c9e69ba2"
	I1108 09:17:56.099820  323854 cri.go:89] found id: "dd10a08245ba675e73bc1f27d7645f3fd56047f90ecce3473b696401526ae0a3"
	I1108 09:17:56.099827  323854 cri.go:89] found id: "ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	I1108 09:17:56.099831  323854 cri.go:89] found id: "06b628afd654f553c9e29b039e937c82ba40dcc65ce40079884e2c2dd706cfbf"
	I1108 09:17:56.099835  323854 cri.go:89] found id: ""
	I1108 09:17:56.099873  323854 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:56.111572  323854 retry.go:31] will retry after 149.848414ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:56Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:56.262257  323854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:56.279248  323854 pause.go:52] kubelet running: false
	I1108 09:17:56.279353  323854 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:56.425054  323854 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:56.425141  323854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:56.493039  323854 cri.go:89] found id: "f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390"
	I1108 09:17:56.493066  323854 cri.go:89] found id: "72ca87c29be3674615ea2310d4ce35a28bf6902372ccf3cbb64b6fe5342d5828"
	I1108 09:17:56.493077  323854 cri.go:89] found id: "d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c"
	I1108 09:17:56.493084  323854 cri.go:89] found id: "4e7e706219a205afe2dd065a3554c1ca7e78cbfdc9f409f62564c8b6003a136a"
	I1108 09:17:56.493088  323854 cri.go:89] found id: "aaae2304edaccb39caba0aedbe6bbbc27ae6f9630f3040981499827f3ad62365"
	I1108 09:17:56.493093  323854 cri.go:89] found id: "fa188b4b7f4f29c847e9cf3900671c80c7e9ffcd91d763bb30db0af0b6fd9ba0"
	I1108 09:17:56.493097  323854 cri.go:89] found id: "3d9b01a52911c8c96e384a546564442fe2555748e1f47f6bb05707a71fd1044d"
	I1108 09:17:56.493102  323854 cri.go:89] found id: "0148d2ce10edecc0211c834c9a26268deafc67cddc66903beb3c4616c9e69ba2"
	I1108 09:17:56.493106  323854 cri.go:89] found id: "dd10a08245ba675e73bc1f27d7645f3fd56047f90ecce3473b696401526ae0a3"
	I1108 09:17:56.493125  323854 cri.go:89] found id: "ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	I1108 09:17:56.493134  323854 cri.go:89] found id: "06b628afd654f553c9e29b039e937c82ba40dcc65ce40079884e2c2dd706cfbf"
	I1108 09:17:56.493137  323854 cri.go:89] found id: ""
	I1108 09:17:56.493179  323854 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:56.505316  323854 retry.go:31] will retry after 479.765649ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:56Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:56.986016  323854 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:57.000258  323854 pause.go:52] kubelet running: false
	I1108 09:17:57.000333  323854 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:57.161701  323854 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:57.161820  323854 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:57.237149  323854 cri.go:89] found id: "f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390"
	I1108 09:17:57.237173  323854 cri.go:89] found id: "72ca87c29be3674615ea2310d4ce35a28bf6902372ccf3cbb64b6fe5342d5828"
	I1108 09:17:57.237178  323854 cri.go:89] found id: "d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c"
	I1108 09:17:57.237181  323854 cri.go:89] found id: "4e7e706219a205afe2dd065a3554c1ca7e78cbfdc9f409f62564c8b6003a136a"
	I1108 09:17:57.237185  323854 cri.go:89] found id: "aaae2304edaccb39caba0aedbe6bbbc27ae6f9630f3040981499827f3ad62365"
	I1108 09:17:57.237189  323854 cri.go:89] found id: "fa188b4b7f4f29c847e9cf3900671c80c7e9ffcd91d763bb30db0af0b6fd9ba0"
	I1108 09:17:57.237193  323854 cri.go:89] found id: "3d9b01a52911c8c96e384a546564442fe2555748e1f47f6bb05707a71fd1044d"
	I1108 09:17:57.237197  323854 cri.go:89] found id: "0148d2ce10edecc0211c834c9a26268deafc67cddc66903beb3c4616c9e69ba2"
	I1108 09:17:57.237210  323854 cri.go:89] found id: "dd10a08245ba675e73bc1f27d7645f3fd56047f90ecce3473b696401526ae0a3"
	I1108 09:17:57.237230  323854 cri.go:89] found id: "ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	I1108 09:17:57.237238  323854 cri.go:89] found id: "06b628afd654f553c9e29b039e937c82ba40dcc65ce40079884e2c2dd706cfbf"
	I1108 09:17:57.237242  323854 cri.go:89] found id: ""
	I1108 09:17:57.237316  323854 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:57.254817  323854 out.go:203] 
	W1108 09:17:57.256122  323854 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:57Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:57Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:57.256148  323854 out.go:285] * 
	* 
	W1108 09:17:57.260379  323854 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:57.261828  323854 out.go:203] 

** /stderr **
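The root cause of the pause failure is visible in the stderr above: after disabling kubelet, minikube enumerates containers with sudo runc list -f json, which exits 1 with "open /run/runc: no such file or directory", and both timed retries at 09:17:56 hit the same error before GUEST_PAUSE aborts. A minimal reproduction sketch, assuming SSH access to the node from this run (profile name taken from the report; whether /run/runc exists depends on which OCI runtime this crio install is configured to use):

	# the exact check minikube retried, run by hand against this profile
	minikube ssh -p no-preload-220714 "sudo runc list -f json"
	# cross-check what the CRI side reports; command mirrored from the crictl call in the stderr above
	minikube ssh -p no-preload-220714 "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"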
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p no-preload-220714 --alsologtostderr -v=1 failed: exit status 80
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-220714
helpers_test.go:243: (dbg) docker inspect no-preload-220714:

-- stdout --
	[
	    {
	        "Id": "446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d",
	        "Created": "2025-11-08T09:15:34.135970344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313279,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:57.204750329Z",
	            "FinishedAt": "2025-11-08T09:16:56.218398591Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/hostname",
	        "HostsPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/hosts",
	        "LogPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d-json.log",
	        "Name": "/no-preload-220714",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-220714:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-220714",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d",
	                "LowerDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-220714",
	                "Source": "/var/lib/docker/volumes/no-preload-220714/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-220714",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-220714",
	                "name.minikube.sigs.k8s.io": "no-preload-220714",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f0b6b36b3f9af9f5510613c5bcde5880c452f9f30b9841f6fb92a0a0ff403bf",
	            "SandboxKey": "/var/run/docker/netns/5f0b6b36b3f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-220714": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:c6:c7:d6:96:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2c6206fd83352e5892c70867654eb8c3127b66df1d3abb8d7e06c7e601cea52",
	                    "EndpointID": "d60f6e2cd683b473df7bdfb84b75c5edf9cd71ce9b2e213e7a180e57261d02cb",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-220714",
	                        "446e9eda1361"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
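For reference, the 22/tcp HostPort in the NetworkSettings block above (33119) matches the one the pause command resolved in its stderr; the lookup can be reproduced directly (format string mirrored from the cli_runner call logged at 09:17:55):

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' no-preload-220714
	# expected output for this run, per the inspect dump above: 33119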
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714: exit status 2 (356.979074ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-220714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-220714 logs -n 25: (1.237635812s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬──────────────
───────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼──────────────
───────┤
	│ delete  │ -p disable-driver-mounts-010877                                                                                                                                                                                                               │ disable-driver-mounts-010877 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴──────────────
───────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:23
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:23.014181  318772 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:23.014490  318772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:23.014501  318772 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:23.014506  318772 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:23.014688  318772 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:23.015160  318772 out.go:368] Setting JSON to false
	I1108 09:17:23.016473  318772 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3594,"bootTime":1762589849,"procs":358,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:23.016562  318772 start.go:143] virtualization: kvm guest
	I1108 09:17:23.018650  318772 out.go:179] * [default-k8s-diff-port-677902] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:23.020167  318772 notify.go:221] Checking for updates...
	I1108 09:17:23.020234  318772 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:23.021653  318772 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:23.023193  318772 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:23.024687  318772 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:23.026129  318772 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:23.027502  318772 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:23.029342  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:23.029838  318772 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:23.055123  318772 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:23.055259  318772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:23.110228  318772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:17:23.100330014 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:23.110439  318772 docker.go:319] overlay module found
	I1108 09:17:23.112516  318772 out.go:179] * Using the docker driver based on existing profile
	I1108 09:17:23.113842  318772 start.go:309] selected driver: docker
	I1108 09:17:23.113858  318772 start.go:930] validating driver "docker" against &{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APISe
rverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] M
ountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:23.113935  318772 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:23.114523  318772 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:23.170233  318772 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:74 OomKillDisable:false NGoroutines:77 SystemTime:2025-11-08 09:17:23.160701234 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:23.170557  318772 start_flags.go:992] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:17:23.170587  318772 cni.go:84] Creating CNI manager for ""
	I1108 09:17:23.170630  318772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:23.170681  318772 start.go:353] cluster config:
	{Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:
cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false
DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:23.173037  318772 out.go:179] * Starting "default-k8s-diff-port-677902" primary control-plane node in "default-k8s-diff-port-677902" cluster
	I1108 09:17:23.174652  318772 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:23.176085  318772 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:23.177478  318772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:23.177520  318772 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:23.177527  318772 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:23.177553  318772 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:23.177617  318772 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:23.177632  318772 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:23.177725  318772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:17:23.200331  318772 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:23.200356  318772 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:23.200379  318772 cache.go:233] Successfully downloaded all kic artifacts
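(For reference, the image-presence check above can be reproduced by hand; a minimal sketch, assuming the same kicbase tag as logged and relying on docker image inspect exiting non-zero when the image is absent:)

    # Hypothetical by-hand version of the local-daemon lookup minikube performs here.
    IMG='gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837'
    if docker image inspect "$IMG" >/dev/null 2>&1; then
        echo "found in local docker daemon, skipping pull"
    fi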
	I1108 09:17:23.200409  318772 start.go:360] acquireMachinesLock for default-k8s-diff-port-677902: {Name:mk526669374d724485de61415f0aa79950bc7fa6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:23.200478  318772 start.go:364] duration metric: took 44.108µs to acquireMachinesLock for "default-k8s-diff-port-677902"
	I1108 09:17:23.200502  318772 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:17:23.200508  318772 fix.go:54] fixHost starting: 
	I1108 09:17:23.200797  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:23.222078  318772 fix.go:112] recreateIfNeeded on default-k8s-diff-port-677902: state=Stopped err=<nil>
	W1108 09:17:23.222126  318772 fix.go:138] unexpected machine state, will restart: <nil>
	W1108 09:17:23.215019  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:25.215267  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:21.423354  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:23.921916  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:25.922381  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:22.022599  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:24.467748  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:26.467970  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:23.223920  318772 out.go:252] * Restarting existing docker container for "default-k8s-diff-port-677902" ...
	I1108 09:17:23.224026  318772 cli_runner.go:164] Run: docker start default-k8s-diff-port-677902
	I1108 09:17:23.517410  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:23.541523  318772 kic.go:430] container "default-k8s-diff-port-677902" state is running.
	I1108 09:17:23.542096  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:23.566822  318772 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/config.json ...
	I1108 09:17:23.567040  318772 machine.go:94] provisionDockerMachine start ...
	I1108 09:17:23.567111  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:23.587476  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:23.587789  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:23.587807  318772 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:17:23.588482  318772 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:43940->127.0.0.1:33124: read: connection reset by peer
	I1108 09:17:26.720488  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:17:26.720521  318772 ubuntu.go:182] provisioning hostname "default-k8s-diff-port-677902"
	I1108 09:17:26.720581  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:26.739702  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:26.739910  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:26.739923  318772 main.go:143] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-677902 && echo "default-k8s-diff-port-677902" | sudo tee /etc/hostname
	I1108 09:17:26.879756  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-677902
	
	I1108 09:17:26.879827  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:26.900874  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:26.901124  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:26.901145  318772 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-677902' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-677902/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-677902' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:17:27.030475  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:17:27.030504  318772 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:17:27.030544  318772 ubuntu.go:190] setting up certificates
	I1108 09:17:27.030558  318772 provision.go:84] configureAuth start
	I1108 09:17:27.030617  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:27.049655  318772 provision.go:143] copyHostCerts
	I1108 09:17:27.049718  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:17:27.049734  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:17:27.049821  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:17:27.049958  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:17:27.049978  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:17:27.050022  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:17:27.050114  318772 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:17:27.050123  318772 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:17:27.050149  318772 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:17:27.050225  318772 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-677902 san=[127.0.0.1 192.168.76.2 default-k8s-diff-port-677902 localhost minikube]
	I1108 09:17:27.218430  318772 provision.go:177] copyRemoteCerts
	I1108 09:17:27.218485  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:17:27.218517  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.238620  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.334066  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:17:27.353472  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I1108 09:17:27.371621  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1108 09:17:27.389736  318772 provision.go:87] duration metric: took 359.161729ms to configureAuth
	I1108 09:17:27.389766  318772 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:17:27.389969  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:27.390099  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.408638  318772 main.go:143] libmachine: Using SSH client type: native
	I1108 09:17:27.408840  318772 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33124 <nil> <nil>}
	I1108 09:17:27.408855  318772 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:17:27.700508  318772 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:17:27.700535  318772 machine.go:97] duration metric: took 4.133482649s to provisionDockerMachine
	I1108 09:17:27.700549  318772 start.go:293] postStartSetup for "default-k8s-diff-port-677902" (driver="docker")
	I1108 09:17:27.700562  318772 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:17:27.700637  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:17:27.700708  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.722016  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.818358  318772 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:17:27.822257  318772 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:17:27.822295  318772 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:17:27.822309  318772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:17:27.822368  318772 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:17:27.822472  318772 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:17:27.822590  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:17:27.830681  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:27.849577  318772 start.go:296] duration metric: took 149.013814ms for postStartSetup
	I1108 09:17:27.849653  318772 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:17:27.849714  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.869059  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:27.960711  318772 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:17:27.965862  318772 fix.go:56] duration metric: took 4.765347999s for fixHost
	I1108 09:17:27.965889  318772 start.go:83] releasing machines lock for "default-k8s-diff-port-677902", held for 4.765396741s
	I1108 09:17:27.965955  318772 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" default-k8s-diff-port-677902
	I1108 09:17:27.984988  318772 ssh_runner.go:195] Run: cat /version.json
	I1108 09:17:27.985031  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:27.985093  318772 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:17:27.985177  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:28.004610  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:28.004907  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:28.149001  318772 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:28.155580  318772 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:17:28.192252  318772 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:17:28.197116  318772 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:17:28.197175  318772 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:17:28.205203  318772 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
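(As printed, the find invocation above has lost its shell quoting in the log; a copy-paste-safe reconstruction, with the globs quoted and the path passed to sh -c as a positional argument instead of embedded braces, is roughly:)

    # Disable any bridge/podman CNI configs by renaming them, so the kindnet
    # CNI recommended earlier is the only network config CRI-O will pick up.
    sudo find /etc/cni/net.d -maxdepth 1 -type f \
        \( \( -name '*bridge*' -or -name '*podman*' \) -and -not -name '*.mk_disabled' \) \
        -printf '%p, ' -exec sh -c 'mv "$1" "$1.mk_disabled"' _ {} \;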
	I1108 09:17:28.205224  318772 start.go:496] detecting cgroup driver to use...
	I1108 09:17:28.205255  318772 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:17:28.205303  318772 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:17:28.220826  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:17:28.234319  318772 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:17:28.234394  318772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:17:28.249292  318772 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:17:28.262217  318772 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:17:28.343998  318772 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:17:28.425777  318772 docker.go:234] disabling docker service ...
	I1108 09:17:28.425843  318772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:17:28.440815  318772 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:17:28.455138  318772 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:17:28.537601  318772 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:17:28.622788  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:17:28.635585  318772 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:17:28.649621  318772 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:17:28.649672  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.659171  318772 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:17:28.659244  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.668583  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.677393  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.686251  318772 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:17:28.694982  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.704557  318772 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.713519  318772 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:17:28.723588  318772 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:17:28.731786  318772 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:17:28.739658  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:28.823880  318772 ssh_runner.go:195] Run: sudo systemctl restart crio
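(Consolidated for readability, the CRI-O reconfiguration above amounts to the following edits to /etc/crio/crio.conf.d/02-crio.conf, with values exactly as logged and shell quoting restored:)

    CONF=/etc/crio/crio.conf.d/02-crio.conf
    # Pin the pause image and switch CRI-O to the systemd cgroup manager.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' "$CONF"
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' "$CONF"
    # Recreate conmon_cgroup directly under the cgroup_manager line.
    sudo sed -i '/conmon_cgroup = .*/d' "$CONF"
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$CONF"
    # Let pods bind privileged ports: drop any stale sysctl entry, ensure a
    # default_sysctls block exists, then prepend the unprivileged-port sysctl.
    sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' "$CONF"
    sudo grep -q '^ *default_sysctls' "$CONF" || \
        sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$CONF"
    sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$CONF"
    # Pick up the new configuration.
    sudo systemctl daemon-reload && sudo systemctl restart crio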
	I1108 09:17:28.925939  318772 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:17:28.926009  318772 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:17:28.930260  318772 start.go:564] Will wait 60s for crictl version
	I1108 09:17:28.930332  318772 ssh_runner.go:195] Run: which crictl
	I1108 09:17:28.934146  318772 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:17:28.959101  318772 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:17:28.959184  318772 ssh_runner.go:195] Run: crio --version
	I1108 09:17:28.987183  318772 ssh_runner.go:195] Run: crio --version
	I1108 09:17:29.017768  318772 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:17:29.019019  318772 cli_runner.go:164] Run: docker network inspect default-k8s-diff-port-677902 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:17:29.036798  318772 ssh_runner.go:195] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I1108 09:17:29.041036  318772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
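(The one-liner above idempotently pins the host.minikube.internal entry; unpacked, the idiom is:)

    # Strip any previous host.minikube.internal line, append the current
    # gateway mapping, then copy the rebuilt file back into place as root.
    { grep -v $'\thost.minikube.internal$' /etc/hosts
      echo $'192.168.76.1\thost.minikube.internal'
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts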
	I1108 09:17:29.051759  318772 kubeadm.go:884] updating cluster {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APISer
verName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:17:29.051887  318772 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:29.051933  318772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:17:29.084447  318772 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:17:29.084468  318772 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:17:29.084512  318772 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:17:29.110976  318772 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:17:29.111002  318772 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:17:29.111018  318772 kubeadm.go:935] updating node { 192.168.76.2 8444 v1.34.1 crio true true} ...
	I1108 09:17:29.111172  318772 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=default-k8s-diff-port-677902 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:17:29.111249  318772 ssh_runner.go:195] Run: crio config
	I1108 09:17:29.155244  318772 cni.go:84] Creating CNI manager for ""
	I1108 09:17:29.155266  318772 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:29.155307  318772 kubeadm.go:85] Using pod CIDR: 10.244.0.0/16
	I1108 09:17:29.155338  318772 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8444 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-677902 NodeName:default-k8s-diff-port-677902 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.76.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.c
rt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:17:29.155495  318772 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-677902"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.76.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:17:29.155551  318772 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:17:29.163669  318772 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:17:29.163736  318772 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:17:29.171252  318772 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (378 bytes)
	I1108 09:17:29.184573  318772 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:17:29.196971  318772 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2224 bytes)
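(The kubeadm.yaml.new just written holds the rendered config shown above; further down, the restart path compares it against the active copy, and an empty diff is what lets minikube conclude the running cluster "does not require reconfiguration":)

    # Same check the log performs later: an empty diff means the desired
    # kubeadm config already matches what the control plane is running.
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new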
	I1108 09:17:29.209695  318772 ssh_runner.go:195] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:17:29.213735  318772 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:17:29.224550  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:29.306727  318772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:17:29.333961  318772 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902 for IP: 192.168.76.2
	I1108 09:17:29.333990  318772 certs.go:195] generating shared ca certs ...
	I1108 09:17:29.334022  318772 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:29.334192  318772 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:17:29.334258  318772 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:17:29.334275  318772 certs.go:257] generating profile certs ...
	I1108 09:17:29.334443  318772 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/client.key
	I1108 09:17:29.334517  318772 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key.36d5c273
	I1108 09:17:29.334567  318772 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key
	I1108 09:17:29.334703  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:17:29.334750  318772 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:17:29.334763  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:17:29.334800  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:17:29.334836  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:17:29.334868  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:17:29.334923  318772 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:17:29.335755  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:17:29.358546  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:17:29.382353  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:17:29.403720  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:17:29.426530  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I1108 09:17:29.450442  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:17:29.471845  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:17:29.489173  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/default-k8s-diff-port-677902/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:17:29.506582  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:17:29.524071  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:17:29.543268  318772 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:17:29.561916  318772 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:17:29.574785  318772 ssh_runner.go:195] Run: openssl version
	I1108 09:17:29.581198  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:17:29.590123  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.593890  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.593942  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:17:29.629344  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:17:29.637798  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:17:29.646788  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.650810  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.650886  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:17:29.686144  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:17:29.694870  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:17:29.704343  318772 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.708244  318772 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.708301  318772 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:17:29.747154  318772 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
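(The openssl/ln pairs above build the standard subject-hash trust layout: each CA file is linked under its hash name in /etc/ssl/certs, which is where OpenSSL looks certificates up. For one certificate the sequence is, as a sketch:)

    # Compute the subject hash (b5213941 for minikubeCA, per the log) and
    # expose the CA under /etc/ssl/certs/<hash>.0 for OpenSSL lookup.
    pem=/usr/share/ca-certificates/minikubeCA.pem
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"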
	I1108 09:17:29.756245  318772 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:17:29.760208  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:17:29.798830  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:17:29.835366  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:17:29.881735  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:17:29.926935  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:17:29.975380  318772 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
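(The six openssl runs above share one idiom: -checkend 86400 makes openssl exit non-zero if the certificate expires within the next 24 hours. Consolidated into a loop over the same certificates:)

    # Flag any control-plane certificate that is within 24h (86400s) of expiry.
    for crt in apiserver-etcd-client apiserver-kubelet-client front-proxy-client \
               etcd/server etcd/healthcheck-client etcd/peer; do
        sudo openssl x509 -noout -checkend 86400 \
            -in "/var/lib/minikube/certs/${crt}.crt" || echo "expiring soon: ${crt}"
    done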
	I1108 09:17:30.025917  318772 kubeadm.go:401] StartCluster: {Name:default-k8s-diff-port-677902 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:default-k8s-diff-port-677902 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:30.026024  318772 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:17:30.026120  318772 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:17:30.057401  318772 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:17:30.057427  318772 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:17:30.057433  318772 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:17:30.057439  318772 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:17:30.057447  318772 cri.go:89] found id: ""
	I1108 09:17:30.057485  318772 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:17:30.069676  318772 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:30Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:30.069736  318772 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:17:30.078414  318772 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:17:30.078433  318772 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:17:30.078477  318772 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:17:30.086093  318772 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:17:30.087564  318772 kubeconfig.go:47] verify endpoint returned: get endpoint: "default-k8s-diff-port-677902" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:30.088577  318772 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-5860/kubeconfig needs updating (will repair): [kubeconfig missing "default-k8s-diff-port-677902" cluster setting kubeconfig missing "default-k8s-diff-port-677902" context setting]
	I1108 09:17:30.089991  318772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.092252  318772 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:17:30.100764  318772 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.76.2
	I1108 09:17:30.100804  318772 kubeadm.go:602] duration metric: took 22.36077ms to restartPrimaryControlPlane
	I1108 09:17:30.100814  318772 kubeadm.go:403] duration metric: took 74.907828ms to StartCluster
	I1108 09:17:30.100831  318772 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.100935  318772 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:30.103426  318772 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:30.103692  318772 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.76.2 Port:8444 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:30.103761  318772 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:17:30.103862  318772 addons.go:70] Setting storage-provisioner=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.103881  318772 addons.go:239] Setting addon storage-provisioner=true in "default-k8s-diff-port-677902"
	W1108 09:17:30.103890  318772 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:17:30.103917  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.103945  318772 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:30.103995  318772 addons.go:70] Setting dashboard=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.104069  318772 addons.go:70] Setting default-storageclass=true in profile "default-k8s-diff-port-677902"
	I1108 09:17:30.104010  318772 addons.go:239] Setting addon dashboard=true in "default-k8s-diff-port-677902"
	I1108 09:17:30.104098  318772 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-677902"
	W1108 09:17:30.104104  318772 addons.go:248] addon dashboard should already be in state true
	I1108 09:17:30.104134  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.104426  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.104485  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.104734  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.126565  318772 out.go:179] * Verifying Kubernetes components...
	I1108 09:17:30.128137  318772 addons.go:239] Setting addon default-storageclass=true in "default-k8s-diff-port-677902"
	W1108 09:17:30.128160  318772 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:17:30.128186  318772 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:17:30.128648  318772 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:17:30.129880  318772 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:17:30.131252  318772 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:17:30.131276  318772 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:17:30.134171  318772 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:17:30.134193  318772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:17:30.134249  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.134885  318772 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	W1108 09:17:27.715578  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:30.215915  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:27.923031  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:29.925443  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:28.968151  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:31.470050  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:30.138683  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:17:30.138707  318772 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:17:30.138768  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.155528  318772 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:17:30.155552  318772 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:17:30.155610  318772 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:17:30.159215  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.161996  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.184265  318772 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:17:30.283069  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:17:30.283103  318772 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:17:30.283635  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:17:30.294542  318772 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:17:30.295994  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:17:30.301109  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:17:30.301130  318772 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:17:30.321171  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:17:30.321197  318772 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:17:30.339306  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:17:30.339332  318772 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:17:30.353887  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:17:30.353939  318772 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:17:30.367921  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:17:30.367943  318772 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:17:30.380743  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:17:30.380768  318772 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:17:30.393662  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:17:30.393688  318772 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:17:30.407461  318772 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:17:30.407490  318772 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:17:30.422749  318772 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:17:32.507801  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.224127884s)
	I1108 09:17:32.507827  318772 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.213247429s)
	I1108 09:17:32.507867  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.211842649s)
	I1108 09:17:32.507875  318772 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-677902" to be "Ready" ...
	I1108 09:17:32.508003  318772 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (2.085193451s)
	I1108 09:17:32.510165  318772 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p default-k8s-diff-port-677902 addons enable metrics-server
	
	I1108 09:17:32.518886  318772 node_ready.go:49] node "default-k8s-diff-port-677902" is "Ready"
	I1108 09:17:32.518917  318772 node_ready.go:38] duration metric: took 11.026405ms for node "default-k8s-diff-port-677902" to be "Ready" ...
	I1108 09:17:32.518932  318772 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:17:32.518979  318772 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:17:32.524408  318772 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:17:32.525554  318772 addons.go:515] duration metric: took 2.421802346s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
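
[Editor's note] The addon flow traced above is: stage each manifest under /etc/kubernetes/addons over SSH, then apply them all in one kubectl invocation with repeated -f flags under the in-VM kubeconfig. A minimal sketch of that final step (not minikube's actual code; the paths and KUBECONFIG value are taken from the logged command):

    package main

    import (
        "fmt"
        "os"
        "os/exec"
    )

    func main() {
        // Manifests staged under /etc/kubernetes/addons, as in the log above.
        manifests := []string{
            "/etc/kubernetes/addons/storage-provisioner.yaml",
            "/etc/kubernetes/addons/storageclass.yaml",
        }
        // kubectl accepts -f repeatedly, so all files go in a single apply.
        args := []string{"apply"}
        for _, m := range manifests {
            args = append(args, "-f", m)
        }
        cmd := exec.Command("kubectl", args...)
        // Point kubectl at the in-VM kubeconfig, as the logged command does.
        cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
        out, err := cmd.CombinedOutput()
        fmt.Printf("%s", out)
        if err != nil {
            fmt.Fprintln(os.Stderr, "apply failed:", err)
        }
    }
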
	I1108 09:17:32.534116  318772 api_server.go:72] duration metric: took 2.430387553s to wait for apiserver process to appear ...
	I1108 09:17:32.534161  318772 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:17:32.534186  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:32.538878  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:32.538905  318772 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
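
[Editor's note] The two [-] entries (rbac/bootstrap-roles, scheduling/bootstrap-system-priority-classes) are post-start hooks that have not completed yet, so /healthz returns 500 while the apiserver is still bootstrapping; the waiter simply retries until it gets a 200. A rough sketch of such a poll loop (assumption: TLS verification is skipped here for brevity, whereas the real client trusts the cluster CA; the URL is the one in the log):

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log above.
        url := "https://192.168.76.2:8444/healthz"
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Assumption: skip cert verification; minikube instead trusts its cluster CA.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for {
            resp, err := client.Get(url)
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("healthz ok")
                    return
                }
                // A 500 body lists each check: [+] passed, [-] still failing.
                fmt.Printf("healthz %d:\n%s", resp.StatusCode, body)
            }
            time.Sleep(500 * time.Millisecond)
        }
    }
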
	W1108 09:17:32.714163  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	W1108 09:17:34.715192  310009 pod_ready.go:104] pod "coredns-5dd5756b68-88pvx" is not "Ready", error: <nil>
	I1108 09:17:35.715420  310009 pod_ready.go:94] pod "coredns-5dd5756b68-88pvx" is "Ready"
	I1108 09:17:35.715446  310009 pod_ready.go:86] duration metric: took 39.006203091s for pod "coredns-5dd5756b68-88pvx" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.718113  310009 pod_ready.go:83] waiting for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.721921  310009 pod_ready.go:94] pod "etcd-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.721942  310009 pod_ready.go:86] duration metric: took 3.80625ms for pod "etcd-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.724454  310009 pod_ready.go:83] waiting for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.728081  310009 pod_ready.go:94] pod "kube-apiserver-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.728098  310009 pod_ready.go:86] duration metric: took 3.62396ms for pod "kube-apiserver-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.730525  310009 pod_ready.go:83] waiting for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:35.914488  310009 pod_ready.go:94] pod "kube-controller-manager-old-k8s-version-339286" is "Ready"
	I1108 09:17:35.914516  310009 pod_ready.go:86] duration metric: took 183.97019ms for pod "kube-controller-manager-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:17:32.424544  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:34.922175  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:33.967021  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:35.967176  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	I1108 09:17:36.113947  310009 pod_ready.go:83] waiting for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:36.517018  310009 pod_ready.go:94] pod "kube-proxy-v4l6x" is "Ready"
	I1108 09:17:36.517049  310009 pod_ready.go:86] duration metric: took 403.07566ms for pod "kube-proxy-v4l6x" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:36.714683  310009 pod_ready.go:83] waiting for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:37.115339  310009 pod_ready.go:94] pod "kube-scheduler-old-k8s-version-339286" is "Ready"
	I1108 09:17:37.115372  310009 pod_ready.go:86] duration metric: took 400.662562ms for pod "kube-scheduler-old-k8s-version-339286" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:37.115387  310009 pod_ready.go:40] duration metric: took 40.411019881s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:37.176895  310009 start.go:628] kubectl: 1.34.1, cluster: 1.28.0 (minor skew: 6)
	I1108 09:17:37.178443  310009 out.go:203] 
	W1108 09:17:37.180072  310009 out.go:285] ! /usr/local/bin/kubectl is version 1.34.1, which may have incompatibilities with Kubernetes 1.28.0.
	I1108 09:17:37.184774  310009 out.go:179]   - Want kubectl v1.28.0? Try 'minikube kubectl -- get pods -A'
	I1108 09:17:37.186452  310009 out.go:179] * Done! kubectl is now configured to use "old-k8s-version-339286" cluster and "default" namespace by default
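
[Editor's note] The "minor skew: 6" figure above is simply the distance between the kubectl and cluster minor versions (1.34 vs 1.28); kubectl officially supports only one minor version of skew in either direction, hence the warning. A toy computation of that figure (hypothetical helper, not minikube's parser):

    package main

    import (
        "fmt"
        "strconv"
        "strings"
    )

    // minorSkew returns |minor(a) - minor(b)| for versions like "1.34.1".
    // Hypothetical helper; real parsing should also handle "v" prefixes etc.
    func minorSkew(a, b string) int {
        minor := func(v string) int {
            parts := strings.Split(v, ".")
            n, _ := strconv.Atoi(parts[1])
            return n
        }
        d := minor(a) - minor(b)
        if d < 0 {
            d = -d
        }
        return d
    }

    func main() {
        fmt.Println(minorSkew("1.34.1", "1.28.0")) // 6, matching the log
    }
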
	I1108 09:17:33.034301  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:33.039725  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:17:33.039752  318772 api_server.go:103] status: https://192.168.76.2:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:17:33.534363  318772 api_server.go:253] Checking apiserver healthz at https://192.168.76.2:8444/healthz ...
	I1108 09:17:33.538638  318772 api_server.go:279] https://192.168.76.2:8444/healthz returned 200:
	ok
	I1108 09:17:33.539622  318772 api_server.go:141] control plane version: v1.34.1
	I1108 09:17:33.539644  318772 api_server.go:131] duration metric: took 1.005476188s to wait for apiserver health ...
	I1108 09:17:33.539652  318772 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:17:33.542649  318772 system_pods.go:59] 8 kube-system pods found
	I1108 09:17:33.542678  318772 system_pods.go:61] "coredns-66bc5c9577-x49dj" [ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:17:33.542686  318772 system_pods.go:61] "etcd-default-k8s-diff-port-677902" [075b3604-f07a-4acb-8680-f000540900f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:17:33.542693  318772 system_pods.go:61] "kindnet-x89ph" [5f49623a-57d7-4854-8c1b-b4ca027bd24c] Running
	I1108 09:17:33.542705  318772 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-677902" [9787b81f-a90f-464b-8a61-d4ec701472f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:17:33.542713  318772 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-677902" [28070357-a633-4a19-a618-390b7a199a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:17:33.542723  318772 system_pods.go:61] "kube-proxy-5d9f2" [e880f62e-f713-4254-98e7-84f3941024f0] Running
	I1108 09:17:33.542730  318772 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-677902" [069d093e-35cb-4235-942b-cf15e67b9432] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:17:33.542734  318772 system_pods.go:61] "storage-provisioner" [00375859-41ff-4f26-b07f-73a5d30e46ee] Running
	I1108 09:17:33.542741  318772 system_pods.go:74] duration metric: took 3.082538ms to wait for pod list to return data ...
	I1108 09:17:33.542750  318772 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:17:33.545077  318772 default_sa.go:45] found service account: "default"
	I1108 09:17:33.545094  318772 default_sa.go:55] duration metric: took 2.339095ms for default service account to be created ...
	I1108 09:17:33.545103  318772 system_pods.go:116] waiting for k8s-apps to be running ...
	I1108 09:17:33.547820  318772 system_pods.go:86] 8 kube-system pods found
	I1108 09:17:33.547846  318772 system_pods.go:89] "coredns-66bc5c9577-x49dj" [ae1ab1f3-40b4-45c6-b11f-14695ad9bc3d] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I1108 09:17:33.547854  318772 system_pods.go:89] "etcd-default-k8s-diff-port-677902" [075b3604-f07a-4acb-8680-f000540900f5] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:17:33.547860  318772 system_pods.go:89] "kindnet-x89ph" [5f49623a-57d7-4854-8c1b-b4ca027bd24c] Running
	I1108 09:17:33.547867  318772 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-677902" [9787b81f-a90f-464b-8a61-d4ec701472f3] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:17:33.547875  318772 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-677902" [28070357-a633-4a19-a618-390b7a199a37] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:17:33.547879  318772 system_pods.go:89] "kube-proxy-5d9f2" [e880f62e-f713-4254-98e7-84f3941024f0] Running
	I1108 09:17:33.547884  318772 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-677902" [069d093e-35cb-4235-942b-cf15e67b9432] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:17:33.547889  318772 system_pods.go:89] "storage-provisioner" [00375859-41ff-4f26-b07f-73a5d30e46ee] Running
	I1108 09:17:33.547898  318772 system_pods.go:126] duration metric: took 2.79107ms to wait for k8s-apps to be running ...
	I1108 09:17:33.547906  318772 system_svc.go:44] waiting for kubelet service to be running ....
	I1108 09:17:33.547945  318772 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:33.561240  318772 system_svc.go:56] duration metric: took 13.32927ms WaitForService to wait for kubelet
	I1108 09:17:33.561268  318772 kubeadm.go:587] duration metric: took 3.457542806s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1108 09:17:33.561299  318772 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:17:33.563775  318772 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:17:33.563796  318772 node_conditions.go:123] node cpu capacity is 8
	I1108 09:17:33.563807  318772 node_conditions.go:105] duration metric: took 2.498943ms to run NodePressure ...
	I1108 09:17:33.563817  318772 start.go:242] waiting for startup goroutines ...
	I1108 09:17:33.563823  318772 start.go:247] waiting for cluster config update ...
	I1108 09:17:33.563833  318772 start.go:256] writing updated cluster config ...
	I1108 09:17:33.564106  318772 ssh_runner.go:195] Run: rm -f paused
	I1108 09:17:33.567850  318772 pod_ready.go:37] extra waiting up to 4m0s for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:33.571308  318772 pod_ready.go:83] waiting for pod "coredns-66bc5c9577-x49dj" in "kube-system" namespace to be "Ready" or be gone ...
	W1108 09:17:35.577410  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:37.578193  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:36.923004  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:38.923619  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:40.924149  312299 pod_ready.go:104] pod "coredns-66bc5c9577-cbw4j" is not "Ready", error: <nil>
	W1108 09:17:37.973264  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:40.469582  313008 pod_ready.go:104] pod "coredns-66bc5c9577-zdb97" is not "Ready", error: <nil>
	W1108 09:17:39.578963  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:42.077248  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:17:42.467997  313008 pod_ready.go:94] pod "coredns-66bc5c9577-zdb97" is "Ready"
	I1108 09:17:42.468035  313008 pod_ready.go:86] duration metric: took 34.505824056s for pod "coredns-66bc5c9577-zdb97" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.470522  313008 pod_ready.go:83] waiting for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.474338  313008 pod_ready.go:94] pod "etcd-no-preload-220714" is "Ready"
	I1108 09:17:42.474362  313008 pod_ready.go:86] duration metric: took 3.818729ms for pod "etcd-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.476372  313008 pod_ready.go:83] waiting for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.480064  313008 pod_ready.go:94] pod "kube-apiserver-no-preload-220714" is "Ready"
	I1108 09:17:42.480092  313008 pod_ready.go:86] duration metric: took 3.702017ms for pod "kube-apiserver-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.481978  313008 pod_ready.go:83] waiting for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.667986  313008 pod_ready.go:94] pod "kube-controller-manager-no-preload-220714" is "Ready"
	I1108 09:17:42.668016  313008 pod_ready.go:86] duration metric: took 186.016263ms for pod "kube-controller-manager-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:42.866316  313008 pod_ready.go:83] waiting for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.266611  313008 pod_ready.go:94] pod "kube-proxy-66cm9" is "Ready"
	I1108 09:17:43.266646  313008 pod_ready.go:86] duration metric: took 400.304671ms for pod "kube-proxy-66cm9" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.465603  313008 pod_ready.go:83] waiting for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.866064  313008 pod_ready.go:94] pod "kube-scheduler-no-preload-220714" is "Ready"
	I1108 09:17:43.866090  313008 pod_ready.go:86] duration metric: took 400.463165ms for pod "kube-scheduler-no-preload-220714" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.866101  313008 pod_ready.go:40] duration metric: took 35.96660519s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:43.912507  313008 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:17:43.914651  313008 out.go:179] * Done! kubectl is now configured to use "no-preload-220714" cluster and "default" namespace by default
	I1108 09:17:43.422936  312299 pod_ready.go:94] pod "coredns-66bc5c9577-cbw4j" is "Ready"
	I1108 09:17:43.422965  312299 pod_ready.go:86] duration metric: took 35.505880955s for pod "coredns-66bc5c9577-cbw4j" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.425909  312299 pod_ready.go:83] waiting for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.431928  312299 pod_ready.go:94] pod "etcd-embed-certs-271910" is "Ready"
	I1108 09:17:43.431954  312299 pod_ready.go:86] duration metric: took 6.020724ms for pod "etcd-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.434331  312299 pod_ready.go:83] waiting for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.438424  312299 pod_ready.go:94] pod "kube-apiserver-embed-certs-271910" is "Ready"
	I1108 09:17:43.438442  312299 pod_ready.go:86] duration metric: took 4.093369ms for pod "kube-apiserver-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.440478  312299 pod_ready.go:83] waiting for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.620323  312299 pod_ready.go:94] pod "kube-controller-manager-embed-certs-271910" is "Ready"
	I1108 09:17:43.620365  312299 pod_ready.go:86] duration metric: took 179.862516ms for pod "kube-controller-manager-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:43.820429  312299 pod_ready.go:83] waiting for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.221050  312299 pod_ready.go:94] pod "kube-proxy-lwbl6" is "Ready"
	I1108 09:17:44.221084  312299 pod_ready.go:86] duration metric: took 400.626058ms for pod "kube-proxy-lwbl6" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.421474  312299 pod_ready.go:83] waiting for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.820796  312299 pod_ready.go:94] pod "kube-scheduler-embed-certs-271910" is "Ready"
	I1108 09:17:44.820825  312299 pod_ready.go:86] duration metric: took 399.325955ms for pod "kube-scheduler-embed-certs-271910" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:17:44.820836  312299 pod_ready.go:40] duration metric: took 36.908910218s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:17:44.864186  312299 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:17:44.865991  312299 out.go:179] * Done! kubectl is now configured to use "embed-certs-271910" cluster and "default" namespace by default
	W1108 09:17:44.577222  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:46.577391  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:48.578983  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:17:51.076640  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
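
[Editor's note] The pod_ready warnings above come from a loop that lists kube-system pods by label and checks each pod's Ready condition; coredns-66bc5c9577-x49dj was still unready when this log was captured. A condensed client-go version of the same check (a sketch under standard client-go APIs, not minikube's pod_ready.go):

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod's Ready condition is True.
    func isReady(p *corev1.Pod) bool {
        for _, c := range p.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        // Same label the waiter uses for the DNS pods.
        pods, err := cs.CoreV1().Pods("kube-system").List(context.Background(),
            metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
        if err != nil {
            panic(err)
        }
        for _, p := range pods.Items {
            fmt.Printf("pod %q ready=%v\n", p.Name, isReady(&p))
        }
    }
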
	
	
	==> CRI-O <==
	Nov 08 09:17:18 no-preload-220714 crio[566]: time="2025-11-08T09:17:18.409332885Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:17:18 no-preload-220714 crio[566]: time="2025-11-08T09:17:18.413373099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:17:18 no-preload-220714 crio[566]: time="2025-11-08T09:17:18.413407409Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.650856587Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5d629fd4-2163-4fc8-b4cb-f445890016de name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.652355591Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28d28628-7a4f-4e1a-b851-c12892214409 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.654217116Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper" id=54feda6c-053e-47b6-8e8e-b25a4f0436c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.654390784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.662227415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.666465562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.704573341Z" level=info msg="Created container ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper" id=54feda6c-053e-47b6-8e8e-b25a4f0436c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.706690531Z" level=info msg="Starting container: ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca" id=ec8c5b43-1aa8-4058-8c7a-c74ccd40f6f1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.709151983Z" level=info msg="Started container" PID=1734 containerID=ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper id=ec8c5b43-1aa8-4058-8c7a-c74ccd40f6f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6b67af985cb40c461b29a2ee908fbd579f6abf5808e2d8b79ffca3827a91ccb
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.792675866Z" level=info msg="Removing container: 92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921" id=78ee8eb4-b24f-4320-ae0c-fd01a7049452 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.805458851Z" level=info msg="Removed container 92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper" id=78ee8eb4-b24f-4320-ae0c-fd01a7049452 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.798138946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2bdde1bc-87d2-4987-a739-197e5bc06b76 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.799022614Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e5656f44-a236-4c2c-86aa-e5dd5401ec8c name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.800099309Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f59012b6-0aae-4fd1-833d-d30023840e4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.80023317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806304749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806534297Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ba3afe0fdef76a93ebb50963258e65989bfaca1263574736a0dc4ecf8cb11a9e/merged/etc/passwd: no such file or directory"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806563739Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ba3afe0fdef76a93ebb50963258e65989bfaca1263574736a0dc4ecf8cb11a9e/merged/etc/group: no such file or directory"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806933556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.844021083Z" level=info msg="Created container f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390: kube-system/storage-provisioner/storage-provisioner" id=f59012b6-0aae-4fd1-833d-d30023840e4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.844729085Z" level=info msg="Starting container: f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390" id=bca0a265-0746-411b-b8dd-685e6ed429f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.848872877Z" level=info msg="Started container" PID=1748 containerID=f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390 description=kube-system/storage-provisioner/storage-provisioner id=bca0a265-0746-411b-b8dd-685e6ed429f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79e14807bfed3be086726dfafc6234b935ce6d41c7397edce5ffba989cee3f45
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f452fc14bc8c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           19 seconds ago      Running             storage-provisioner         1                   79e14807bfed3       storage-provisioner                          kube-system
	ecfda84502ad3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           21 seconds ago      Exited              dashboard-metrics-scraper   2                   a6b67af985cb4       dashboard-metrics-scraper-6ffb444bf9-vff45   kubernetes-dashboard
	06b628afd654f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   40 seconds ago      Running             kubernetes-dashboard        0                   d167c8289ae2d       kubernetes-dashboard-855c9754f9-v2xrs        kubernetes-dashboard
	afa8047b11d67       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           50 seconds ago      Running             busybox                     1                   74bd10425b896       busybox                                      default
	72ca87c29be36       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           50 seconds ago      Running             coredns                     0                   99a21fcc4a235       coredns-66bc5c9577-zdb97                     kube-system
	d3e1079b945e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           50 seconds ago      Exited              storage-provisioner         0                   79e14807bfed3       storage-provisioner                          kube-system
	4e7e706219a20       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           50 seconds ago      Running             kindnet-cni                 0                   36978bd0a0523       kindnet-9sg4x                                kube-system
	aaae2304edacc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           50 seconds ago      Running             kube-proxy                  0                   4c9e786afa346       kube-proxy-66cm9                             kube-system
	fa188b4b7f4f2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           54 seconds ago      Running             kube-scheduler              0                   09b77e6832bc9       kube-scheduler-no-preload-220714             kube-system
	3d9b01a52911c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           54 seconds ago      Running             kube-apiserver              0                   4dba17f53353c       kube-apiserver-no-preload-220714             kube-system
	0148d2ce10ede       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           54 seconds ago      Running             kube-controller-manager     0                   14e76939efc91       kube-controller-manager-no-preload-220714    kube-system
	dd10a08245ba6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           54 seconds ago      Running             etcd                        0                   43e5c0021f201       etcd-no-preload-220714                       kube-system
	
	
	==> coredns [72ca87c29be3674615ea2310d4ce35a28bf6902372ccf3cbb64b6fe5342d5828] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50646 - 3894 "HINFO IN 5806252891480138078.6511606253546224025. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044699016s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
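
[Editor's note] The errors above are dial timeouts to 10.96.0.1:443, the in-cluster Service VIP for the API server; they clear once kube-proxy has programmed the service rules, which matches the "starting server with unsynced Kubernetes API" warning earlier in the same log. A trivial probe for that path (run from inside a pod; the address is taken from the log):

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    func main() {
        // kubernetes.default Service VIP from the coredns errors above.
        conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 2*time.Second)
        if err != nil {
            fmt.Println("service VIP unreachable:", err)
            return
        }
        conn.Close()
        fmt.Println("service VIP reachable")
    }
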
	
	
	==> describe nodes <==
	Name:               no-preload-220714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-220714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=no-preload-220714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-220714
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:17:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-220714
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a3fafd7f-70e4-4709-9069-846d0b2022cf
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kube-system                 coredns-66bc5c9577-zdb97                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     104s
	  kube-system                 etcd-no-preload-220714                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-9sg4x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      105s
	  kube-system                 kube-apiserver-no-preload-220714              250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-no-preload-220714     200m (2%)     0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-proxy-66cm9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kube-system                 kube-scheduler-no-preload-220714              100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vff45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v2xrs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 103s                 kube-proxy       
	  Normal  Starting                 50s                  kube-proxy       
	  Normal  Starting                 115s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  115s (x8 over 115s)  kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    115s (x8 over 115s)  kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     115s (x8 over 115s)  kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    110s                 kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  110s                 kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     110s                 kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  Starting                 110s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           105s                 node-controller  Node no-preload-220714 event: Registered Node no-preload-220714 in Controller
	  Normal  NodeReady                91s                  kubelet          Node no-preload-220714 status is now: NodeReady
	  Normal  Starting                 55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  55s (x8 over 55s)    kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    55s (x8 over 55s)    kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     55s (x8 over 55s)    kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           48s                  node-controller  Node no-preload-220714 event: Registered Node no-preload-220714 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [dd10a08245ba675e73bc1f27d7645f3fd56047f90ecce3473b696401526ae0a3] <==
	{"level":"warn","ts":"2025-11-08T09:17:05.847895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.857231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.868048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.876071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.883936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.891806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.898667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.907521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.916534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.924014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.932394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.940076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.947216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.963238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.977025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.984428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.991690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.999607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.006671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.016333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.025059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.040992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.047784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.054167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.111598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:17:58 up  1:00,  0 user,  load average: 4.40, 4.01, 2.61
	Linux no-preload-220714 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e7e706219a205afe2dd065a3554c1ca7e78cbfdc9f409f62564c8b6003a136a] <==
	I1108 09:17:08.088253       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:17:08.182265       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:17:08.182497       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:17:08.182521       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:17:08.182550       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:17:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:17:08.389007       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:17:08.389027       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:17:08.389039       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:17:08.389185       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:17:08.881992       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:17:08.882116       1 metrics.go:72] Registering metrics
	I1108 09:17:08.882249       1 controller.go:711] "Syncing nftables rules"
	I1108 09:17:18.389521       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:18.389595       1 main.go:301] handling current node
	I1108 09:17:28.389539       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:28.389567       1 main.go:301] handling current node
	I1108 09:17:38.389513       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:38.389573       1 main.go:301] handling current node
	I1108 09:17:48.389522       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:48.389583       1 main.go:301] handling current node
	I1108 09:17:58.397363       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:58.397401       1 main.go:301] handling current node
	
	
	==> kube-apiserver [3d9b01a52911c8c96e384a546564442fe2555748e1f47f6bb05707a71fd1044d] <==
	I1108 09:17:06.689604       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:17:06.690583       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:17:06.690761       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:17:06.691388       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:17:06.690780       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:17:06.692509       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:17:06.692523       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:17:06.692532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:17:06.700515       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:17:06.707446       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:17:06.707476       1 policy_source.go:240] refreshing policies
	I1108 09:17:06.712376       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:17:06.748612       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:17:06.807228       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:17:07.065306       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:17:07.100974       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:07.124514       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:07.137988       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:07.216680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.119.246"}
	I1108 09:17:07.234953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.150.28"}
	I1108 09:17:07.577745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:10.415637       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:10.415687       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:10.514546       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:17:10.563573       1 controller.go:667] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0148d2ce10edecc0211c834c9a26268deafc67cddc66903beb3c4616c9e69ba2] <==
	I1108 09:17:09.999077       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:17:10.004353       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:17:10.010354       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:17:10.010449       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:10.010990       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:17:10.011019       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:17:10.010992       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:17:10.011015       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:17:10.012863       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:17:10.012941       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:17:10.013031       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-220714"
	I1108 09:17:10.013110       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:17:10.014099       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:17:10.018442       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:17:10.019923       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:10.022185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:10.022210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:10.022236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:10.022258       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:17:10.039941       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:17:10.040052       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:17:10.040099       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:17:10.040111       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:17:10.040132       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:17:10.044235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [aaae2304edaccb39caba0aedbe6bbbc27ae6f9630f3040981499827f3ad62365] <==
	I1108 09:17:07.988440       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:17:08.058340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:17:08.158848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:08.158895       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1108 09:17:08.159017       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:08.181643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:08.181700       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:08.188540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:08.188951       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:08.189031       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:08.190792       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:08.190825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:08.190869       1 config.go:200] "Starting service config controller"
	I1108 09:17:08.190883       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:08.190950       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:08.190962       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:08.190953       1 config.go:309] "Starting node config controller"
	I1108 09:17:08.190989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:08.190999       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:08.291179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:17:08.291187       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:08.291189       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [fa188b4b7f4f29c847e9cf3900671c80c7e9ffcd91d763bb30db0af0b6fd9ba0] <==
	I1108 09:17:04.811874       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:17:06.630266       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:17:06.630347       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:17:06.630361       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:17:06.630390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:17:06.696521       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:17:06.696570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:06.701217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.701325       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.703597       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:17:06.703642       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:17:06.802600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830439     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv89j\" (UniqueName: \"kubernetes.io/projected/4fa492e2-9880-45c3-ae24-c29ac5327451-kube-api-access-mv89j\") pod \"dashboard-metrics-scraper-6ffb444bf9-vff45\" (UID: \"4fa492e2-9880-45c3-ae24-c29ac5327451\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45"
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830513     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e9ce6e5-b160-47a2-a07c-4419790dd9e6-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-v2xrs\" (UID: \"1e9ce6e5-b160-47a2-a07c-4419790dd9e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2xrs"
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830542     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4fa492e2-9880-45c3-ae24-c29ac5327451-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vff45\" (UID: \"4fa492e2-9880-45c3-ae24-c29ac5327451\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45"
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830567     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbnf\" (UniqueName: \"kubernetes.io/projected/1e9ce6e5-b160-47a2-a07c-4419790dd9e6-kube-api-access-pgbnf\") pod \"kubernetes-dashboard-855c9754f9-v2xrs\" (UID: \"1e9ce6e5-b160-47a2-a07c-4419790dd9e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2xrs"
	Nov 08 09:17:11 no-preload-220714 kubelet[710]: I1108 09:17:11.961650     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:17:14 no-preload-220714 kubelet[710]: I1108 09:17:14.724250     710 scope.go:117] "RemoveContainer" containerID="eec36c0e743c48957b08ca20e4d1277ed20b4c6be681292db42e19639dcc7aa5"
	Nov 08 09:17:15 no-preload-220714 kubelet[710]: I1108 09:17:15.730544     710 scope.go:117] "RemoveContainer" containerID="eec36c0e743c48957b08ca20e4d1277ed20b4c6be681292db42e19639dcc7aa5"
	Nov 08 09:17:15 no-preload-220714 kubelet[710]: I1108 09:17:15.730785     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:15 no-preload-220714 kubelet[710]: E1108 09:17:15.730924     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:16 no-preload-220714 kubelet[710]: I1108 09:17:16.735090     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:16 no-preload-220714 kubelet[710]: E1108 09:17:16.735334     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:18 no-preload-220714 kubelet[710]: I1108 09:17:18.753502     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2xrs" podStartSLOduration=1.9983163560000001 podStartE2EDuration="8.753478875s" podCreationTimestamp="2025-11-08 09:17:10 +0000 UTC" firstStartedPulling="2025-11-08 09:17:11.000112601 +0000 UTC m=+7.471285773" lastFinishedPulling="2025-11-08 09:17:17.755275149 +0000 UTC m=+14.226448292" observedRunningTime="2025-11-08 09:17:18.753429745 +0000 UTC m=+15.224602909" watchObservedRunningTime="2025-11-08 09:17:18.753478875 +0000 UTC m=+15.224652039"
	Nov 08 09:17:24 no-preload-220714 kubelet[710]: I1108 09:17:24.637026     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:24 no-preload-220714 kubelet[710]: E1108 09:17:24.637241     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: I1108 09:17:36.650226     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: I1108 09:17:36.789239     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: I1108 09:17:36.789550     710 scope.go:117] "RemoveContainer" containerID="ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: E1108 09:17:36.789755     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:38 no-preload-220714 kubelet[710]: I1108 09:17:38.797801     710 scope.go:117] "RemoveContainer" containerID="d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c"
	Nov 08 09:17:44 no-preload-220714 kubelet[710]: I1108 09:17:44.636356     710 scope.go:117] "RemoveContainer" containerID="ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	Nov 08 09:17:44 no-preload-220714 kubelet[710]: E1108 09:17:44.636574     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:56 no-preload-220714 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:17:56 no-preload-220714 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:17:56 no-preload-220714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:17:56 no-preload-220714 systemd[1]: kubelet.service: Consumed 1.730s CPU time.
	
	
	==> kubernetes-dashboard [06b628afd654f553c9e29b039e937c82ba40dcc65ce40079884e2c2dd706cfbf] <==
	2025/11/08 09:17:17 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:17 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:17 Using secret token for csrf signing
	2025/11/08 09:17:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:17:17 Generating JWE encryption key
	2025/11/08 09:17:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:17 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:17 Creating in-cluster Sidecar client
	2025/11/08 09:17:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:17 Serving insecurely on HTTP port: 9090
	2025/11/08 09:17:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:17 Starting overwatch
	
	
	==> storage-provisioner [d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c] <==
	I1108 09:17:07.956784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:17:37.959741       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390] <==
	I1108 09:17:38.864991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:17:38.875134       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:17:38.875211       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:17:38.878057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:42.333103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:46.593127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:50.191119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:53.245623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:56.268387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:56.273471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:56.273658       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:17:56.273843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-220714_440ad86f-06eb-473d-9c97-7df2a0acf36e!
	I1108 09:17:56.273838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43e594f5-edfa-4361-8eb4-8fe5628502f4", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-220714_440ad86f-06eb-473d-9c97-7df2a0acf36e became leader
	W1108 09:17:56.276025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:56.280162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:56.374025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-220714_440ad86f-06eb-473d-9c97-7df2a0acf36e!
	W1108 09:17:58.283671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:58.289913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
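
Two details in the logs above are worth annotating before the status checks that follow. The etcd block is dominated by `rejected connection on client endpoint ... error: EOF` warnings; that pattern is typical of raw TCP liveness probes that dial the client port and hang up before any TLS handshake, so etcd sees an immediate EOF. A minimal sketch of such a connect-then-close probe (the address and its use as a probe are assumptions for illustration, not taken from the harness):

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	// probe opens a plain TCP connection and closes it without a TLS
	// handshake; on the server side etcd logs the half-open connection as
	// "rejected connection on client endpoint ... error: EOF".
	func probe(addr string) bool {
		conn, err := net.DialTimeout("tcp", addr, time.Second)
		if err != nil {
			return false
		}
		conn.Close()
		return true
	}

	func main() {
		fmt.Println(probe("127.0.0.1:2379")) // 2379 is etcd's default client port
	}

The kubelet block, meanwhile, ends with systemd stopping kubelet at 09:17:56, which is consistent with the pause step this post-mortem follows: the node container keeps running (hence the `Running` status below) even though kubelet is down.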
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220714 -n no-preload-220714
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220714 -n no-preload-220714: exit status 2 (394.936277ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
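
The bare `Running` above is what `--format={{.APIServer}}` prints: minikube renders the flag as a Go text/template against its status object, so a single field can read `Running` while the exit status still reflects the overall cluster state (here exit 2, which the harness treats as possibly ok). A rough illustration of that rendering, where the struct and its field set are stand-ins rather than minikube's actual types:

	package main

	import (
		"os"
		"text/template"
	)

	// Status is a stand-in for the object minikube exposes to --format templates.
	type Status struct {
		Host, Kubelet, APIServer string
	}

	func main() {
		// --format={{.APIServer}} selects one field from the status object.
		t := template.Must(template.New("status").Parse("{{.APIServer}}\n"))
		t.Execute(os.Stdout, Status{Host: "Running", Kubelet: "Stopped", APIServer: "Running"})
	}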
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-220714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
E1108 09:17:59.356062    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
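
The kubectl call above lists non-Running pods across all namespaces via a field selector plus JSONPath; the `cert_rotation` line is a stray client-go warning about a client certificate belonging to another profile (`kindnet-732849`) whose files no longer exist, not an error from this cluster. The same query expressed through client-go looks roughly like this (kubeconfig handling here is an assumption):

	package main

	import (
		"context"
		"fmt"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Load ~/.kube/config, the same default kubectl uses.
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		// An empty namespace ("") lists across all namespaces, matching -A.
		pods, err := cs.CoreV1().Pods("").List(context.Background(),
			metav1.ListOptions{FieldSelector: "status.phase!=Running"})
		if err != nil {
			panic(err)
		}
		for _, p := range pods.Items {
			fmt.Println(p.Namespace, p.Name, p.Status.Phase)
		}
	}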
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect no-preload-220714
helpers_test.go:243: (dbg) docker inspect no-preload-220714:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d",
	        "Created": "2025-11-08T09:15:34.135970344Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 313279,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:57.204750329Z",
	            "FinishedAt": "2025-11-08T09:16:56.218398591Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/hostname",
	        "HostsPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/hosts",
	        "LogPath": "/var/lib/docker/containers/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d/446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d-json.log",
	        "Name": "/no-preload-220714",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "no-preload-220714:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "no-preload-220714",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "446e9eda1361b683679862bdd87aff0d5d8e47a6698a4b787659683b40c1b58d",
	                "LowerDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/merged",
	                "UpperDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/diff",
	                "WorkDir": "/var/lib/docker/overlay2/fdb5b61330f92e0f9f4e97501587c5e6ffe2a5c541bede6849cfe01bfd800187/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "no-preload-220714",
	                "Source": "/var/lib/docker/volumes/no-preload-220714/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "no-preload-220714",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "no-preload-220714",
	                "name.minikube.sigs.k8s.io": "no-preload-220714",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5f0b6b36b3f9af9f5510613c5bcde5880c452f9f30b9841f6fb92a0a0ff403bf",
	            "SandboxKey": "/var/run/docker/netns/5f0b6b36b3f9",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33119"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33120"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33123"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33121"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33122"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "no-preload-220714": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.94.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "7a:c6:c7:d6:96:7c",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "d2c6206fd83352e5892c70867654eb8c3127b66df1d3abb8d7e06c7e601cea52",
	                    "EndpointID": "d60f6e2cd683b473df7bdfb84b75c5edf9cd71ce9b2e213e7a180e57261d02cb",
	                    "Gateway": "192.168.94.1",
	                    "IPAddress": "192.168.94.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "no-preload-220714",
	                        "446e9eda1361"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
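
In a dump like the one above the harness mainly cares about `State.Status` and the published ports under `NetworkSettings.Ports` (8443/tcp maps to the API server, here on host port 33122). A small sketch of extracting those fields from `docker inspect` output; this helper is illustrative and not part of helpers_test.go:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	// inspect mirrors just the fields of `docker inspect` JSON we need.
	type inspect struct {
		State struct {
			Status string
		}
		NetworkSettings struct {
			Ports map[string][]struct {
				HostIp   string
				HostPort string
			}
		}
	}

	func main() {
		out, err := exec.Command("docker", "inspect", "no-preload-220714").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// docker inspect always returns a JSON array, one entry per container.
		var infos []inspect
		if err := json.Unmarshal(out, &infos); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		for _, c := range infos {
			fmt.Println(c.State.Status, c.NetworkSettings.Ports["8443/tcp"])
		}
	}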
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714: exit status 2 (373.943989ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/no-preload/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/no-preload/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-220714 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p no-preload-220714 logs -n 25: (1.237069454s)
helpers_test.go:260: TestStartStop/group/no-preload/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:58.478924  325211 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:58.479071  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479083  325211 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:58.479096  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479366  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:58.479861  325211 out.go:368] Setting JSON to false
	I1108 09:17:58.481212  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3629,"bootTime":1762589849,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:58.481320  325211 start.go:143] virtualization: kvm guest
	I1108 09:17:58.483829  325211 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:58.485799  325211 notify.go:221] Checking for updates...
	I1108 09:17:58.485811  325211 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:58.487583  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:58.489038  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:58.490367  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:58.491457  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:58.492651  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:58.494295  325211 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494419  325211 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494527  325211 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494637  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:58.521877  325211 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:58.522010  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.588747  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.576854709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.588862  325211 docker.go:319] overlay module found
	I1108 09:17:58.590962  325211 out.go:179] * Using the docker driver based on user configuration
	I1108 09:17:58.592340  325211 start.go:309] selected driver: docker
	I1108 09:17:58.592358  325211 start.go:930] validating driver "docker" against <nil>
	I1108 09:17:58.592371  325211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:58.593036  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.659441  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.646701871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.659624  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:17:58.659658  325211 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:17:58.659915  325211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:17:58.662513  325211 out.go:179] * Using Docker driver with root privileges
	I1108 09:17:58.663816  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:17:58.663873  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:58.663883  325211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:17:58.663955  325211 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:58.665267  325211 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:17:58.666553  325211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:58.667895  325211 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:58.669060  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:58.669119  325211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:58.669133  325211 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:58.669179  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:58.669265  325211 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:58.669277  325211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:58.669428  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:17:58.669460  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json: {Name:mk81817e2e19a8fdfa1ca2cba702e48d1cb06c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:58.692744  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:58.692762  325211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:58.692786  325211 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:58.692814  325211 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:58.692902  325211 start.go:364] duration metric: took 71.682µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:17:58.692929  325211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:58.693004  325211 start.go:125] createHost starting for "" (driver="docker")
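	The cluster config echoed above implies a start invocation roughly like the one below; this is a reconstruction from the recorded flags (profile, driver, runtime, memory/CPUs, and the kubeadm pod-network-cidr extra option), not the literal command captured by the harness:
	
	  out/minikube-linux-amd64 start -p newest-cni-620528 --driver=docker --container-runtime=crio --kubernetes-version=v1.34.1 --memory=3072 --cpus=2 --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16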
	
	
	==> CRI-O <==
	Nov 08 09:17:18 no-preload-220714 crio[566]: time="2025-11-08T09:17:18.409332885Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:17:18 no-preload-220714 crio[566]: time="2025-11-08T09:17:18.413373099Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:17:18 no-preload-220714 crio[566]: time="2025-11-08T09:17:18.413407409Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.650856587Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=5d629fd4-2163-4fc8-b4cb-f445890016de name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.652355591Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=28d28628-7a4f-4e1a-b851-c12892214409 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.654217116Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper" id=54feda6c-053e-47b6-8e8e-b25a4f0436c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.654390784Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.662227415Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.666465562Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.704573341Z" level=info msg="Created container ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper" id=54feda6c-053e-47b6-8e8e-b25a4f0436c8 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.706690531Z" level=info msg="Starting container: ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca" id=ec8c5b43-1aa8-4058-8c7a-c74ccd40f6f1 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.709151983Z" level=info msg="Started container" PID=1734 containerID=ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper id=ec8c5b43-1aa8-4058-8c7a-c74ccd40f6f1 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a6b67af985cb40c461b29a2ee908fbd579f6abf5808e2d8b79ffca3827a91ccb
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.792675866Z" level=info msg="Removing container: 92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921" id=78ee8eb4-b24f-4320-ae0c-fd01a7049452 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:36 no-preload-220714 crio[566]: time="2025-11-08T09:17:36.805458851Z" level=info msg="Removed container 92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45/dashboard-metrics-scraper" id=78ee8eb4-b24f-4320-ae0c-fd01a7049452 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.798138946Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=2bdde1bc-87d2-4987-a739-197e5bc06b76 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.799022614Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=e5656f44-a236-4c2c-86aa-e5dd5401ec8c name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.800099309Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=f59012b6-0aae-4fd1-833d-d30023840e4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.80023317Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806304749Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806534297Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/ba3afe0fdef76a93ebb50963258e65989bfaca1263574736a0dc4ecf8cb11a9e/merged/etc/passwd: no such file or directory"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806563739Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/ba3afe0fdef76a93ebb50963258e65989bfaca1263574736a0dc4ecf8cb11a9e/merged/etc/group: no such file or directory"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.806933556Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.844021083Z" level=info msg="Created container f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390: kube-system/storage-provisioner/storage-provisioner" id=f59012b6-0aae-4fd1-833d-d30023840e4f name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.844729085Z" level=info msg="Starting container: f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390" id=bca0a265-0746-411b-b8dd-685e6ed429f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:38 no-preload-220714 crio[566]: time="2025-11-08T09:17:38.848872877Z" level=info msg="Started container" PID=1748 containerID=f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390 description=kube-system/storage-provisioner/storage-provisioner id=bca0a265-0746-411b-b8dd-685e6ed429f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=79e14807bfed3be086726dfafc6234b935ce6d41c7397edce5ffba989cee3f45
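	The CRI-O entries above are read from the node's systemd journal; while the profile is up, the same stream can be tailed with, for example:
	
	  minikube ssh -p no-preload-220714 "sudo journalctl -u crio --no-pager -n 50"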
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	f452fc14bc8c7       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   79e14807bfed3       storage-provisioner                          kube-system
	ecfda84502ad3       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           23 seconds ago      Exited              dashboard-metrics-scraper   2                   a6b67af985cb4       dashboard-metrics-scraper-6ffb444bf9-vff45   kubernetes-dashboard
	06b628afd654f       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   42 seconds ago      Running             kubernetes-dashboard        0                   d167c8289ae2d       kubernetes-dashboard-855c9754f9-v2xrs        kubernetes-dashboard
	afa8047b11d67       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   74bd10425b896       busybox                                      default
	72ca87c29be36       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   99a21fcc4a235       coredns-66bc5c9577-zdb97                     kube-system
	d3e1079b945e3       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   79e14807bfed3       storage-provisioner                          kube-system
	4e7e706219a20       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   36978bd0a0523       kindnet-9sg4x                                kube-system
	aaae2304edacc       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   4c9e786afa346       kube-proxy-66cm9                             kube-system
	fa188b4b7f4f2       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   09b77e6832bc9       kube-scheduler-no-preload-220714             kube-system
	3d9b01a52911c       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   4dba17f53353c       kube-apiserver-no-preload-220714             kube-system
	0148d2ce10ede       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   14e76939efc91       kube-controller-manager-no-preload-220714    kube-system
	dd10a08245ba6       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   43e5c0021f201       etcd-no-preload-220714                       kube-system
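	This table is CRI-level container state; note the two storage-provisioner rows, the Exited attempt 0 and the Running attempt 1 that replaced it. An equivalent listing can be produced on the node itself:
	
	  minikube ssh -p no-preload-220714 "sudo crictl ps -a"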
	
	
	==> coredns [72ca87c29be3674615ea2310d4ce35a28bf6902372ccf3cbb64b6fe5342d5828] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = c7556d8fdf49c5e32a9077be8cfb9fc6947bb07e663a10d55b192eb63ad1f2bd9793e8e5f5a36fc9abb1957831eec5c997fd9821790e3990ae9531bf41ecea37
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:50646 - 3894 "HINFO IN 5806252891480138078.6511606253546224025. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.044699016s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
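	The three list failures against https://10.96.0.1:443 (the kubernetes Service VIP) show CoreDNS could not reach the API server over the service network for a window after restart; the same VIP timeout kills the first storage-provisioner instance further down. A plausible follow-up, assuming the standard k8s-app=kube-dns label and the kubeconfig context minikube writes for the profile:
	
	  kubectl --context no-preload-220714 -n kube-system logs -l k8s-app=kube-dns --tail=20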
	
	
	==> describe nodes <==
	Name:               no-preload-220714
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-220714
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=no-preload-220714
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_09_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-220714
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:17:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:37 +0000   Sat, 08 Nov 2025 09:16:27 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.94.2
	  Hostname:    no-preload-220714
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                a3fafd7f-70e4-4709-9069-846d0b2022cf
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kube-system                 coredns-66bc5c9577-zdb97                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-no-preload-220714                        100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         113s
	  kube-system                 kindnet-9sg4x                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      107s
	  kube-system                 kube-apiserver-no-preload-220714              250m (3%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 kube-controller-manager-no-preload-220714     200m (2%)     0 (0%)      0 (0%)           0 (0%)         112s
	  kube-system                 kube-proxy-66cm9                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	  kube-system                 kube-scheduler-no-preload-220714              100m (1%)     0 (0%)      0 (0%)           0 (0%)         113s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         105s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-vff45    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-v2xrs         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 105s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 117s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 117s)  kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 117s)  kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 117s)  kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    112s                 kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  112s                 kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s                 kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  Starting                 112s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node no-preload-220714 event: Registered Node no-preload-220714 in Controller
	  Normal  NodeReady                93s                  kubelet          Node no-preload-220714 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node no-preload-220714 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node no-preload-220714 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node no-preload-220714 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           50s                  node-controller  Node no-preload-220714 event: Registered Node no-preload-220714 in Controller
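	The node description above is ordinary kubectl output and can be regenerated directly while the profile is running:
	
	  kubectl --context no-preload-220714 describe node no-preload-220714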
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
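	The repeated "martian source" lines are the kernel flagging packets whose 10.244.0.x source fails the reverse-path check on eth0, which is common and generally harmless on Docker bridge networks. If the noise matters, martian logging can be disabled on the host:
	
	  sudo sysctl -w net.ipv4.conf.all.log_martians=0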
	
	
	==> etcd [dd10a08245ba675e73bc1f27d7645f3fd56047f90ecce3473b696401526ae0a3] <==
	{"level":"warn","ts":"2025-11-08T09:17:05.847895Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53372","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.857231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53418","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.868048Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53464","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.876071Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53494","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.883936Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53522","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.891806Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.898667Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53586","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.907521Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53600","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.916534Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53624","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.924014Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53658","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.932394Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53692","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.940076Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53724","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.947216Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53778","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.963238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53838","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.977025Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53894","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.984428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53938","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.991690Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53976","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.999607Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54020","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.006671Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54060","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.016333Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54076","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.025059Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54128","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.040992Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54190","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.047784Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54198","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.054167Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54234","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.111598Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54254","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:00 up  1:00,  0 user,  load average: 4.45, 4.02, 2.63
	Linux no-preload-220714 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [4e7e706219a205afe2dd065a3554c1ca7e78cbfdc9f409f62564c8b6003a136a] <==
	I1108 09:17:08.088253       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:17:08.182265       1 main.go:139] hostIP = 192.168.94.2
	podIP = 192.168.94.2
	I1108 09:17:08.182497       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:17:08.182521       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:17:08.182550       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:17:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:17:08.389007       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:17:08.389027       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:17:08.389039       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:17:08.389185       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:17:08.881992       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:17:08.882116       1 metrics.go:72] Registering metrics
	I1108 09:17:08.882249       1 controller.go:711] "Syncing nftables rules"
	I1108 09:17:18.389521       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:18.389595       1 main.go:301] handling current node
	I1108 09:17:28.389539       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:28.389567       1 main.go:301] handling current node
	I1108 09:17:38.389513       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:38.389573       1 main.go:301] handling current node
	I1108 09:17:48.389522       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:48.389583       1 main.go:301] handling current node
	I1108 09:17:58.397363       1 main.go:297] Handling node with IPs: map[192.168.94.2:{}]
	I1108 09:17:58.397401       1 main.go:301] handling current node
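	kindnet is handling only this single node (192.168.94.2) on its ten-second resync loop, and the nri.sock error just means the optional NRI socket is absent on the node. Assuming the app=kindnet label minikube applies to the daemonset, recent output can be pulled with:
	
	  kubectl --context no-preload-220714 -n kube-system logs -l app=kindnet --tail=10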
	
	
	==> kube-apiserver [3d9b01a52911c8c96e384a546564442fe2555748e1f47f6bb05707a71fd1044d] <==
	I1108 09:17:06.689604       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:17:06.690583       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:17:06.690761       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:17:06.691388       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:17:06.690780       1 shared_informer.go:356] "Caches are synced" controller="configmaps"
	I1108 09:17:06.692509       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:17:06.692523       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:17:06.692532       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:17:06.700515       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:17:06.707446       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:17:06.707476       1 policy_source.go:240] refreshing policies
	I1108 09:17:06.712376       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:17:06.748612       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:17:06.807228       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:17:07.065306       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:17:07.100974       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:07.124514       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:07.137988       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:07.216680       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.96.119.246"}
	I1108 09:17:07.234953       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.104.150.28"}
	I1108 09:17:07.577745       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:10.415637       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:10.415687       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:10.514546       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:17:10.563573       1 controller.go:667] quota admission added evaluator for: endpoints
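	The two "allocated clusterIPs" lines correspond to the dashboard Services created for this test (10.96.119.246 and 10.104.150.28); they can be listed with:
	
	  kubectl --context no-preload-220714 -n kubernetes-dashboard get svc -o wide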
	
	
	==> kube-controller-manager [0148d2ce10edecc0211c834c9a26268deafc67cddc66903beb3c4616c9e69ba2] <==
	I1108 09:17:09.999077       1 shared_informer.go:356] "Caches are synced" controller="stateful set"
	I1108 09:17:10.004353       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:17:10.010354       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:17:10.010449       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:10.010990       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:17:10.011019       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:17:10.010992       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:17:10.011015       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:17:10.012863       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:17:10.012941       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:17:10.013031       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="no-preload-220714"
	I1108 09:17:10.013110       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:17:10.014099       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:17:10.018442       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:17:10.019923       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:10.022185       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:10.022210       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:10.022236       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:10.022258       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:17:10.039941       1 shared_informer.go:356] "Caches are synced" controller="node"
	I1108 09:17:10.040052       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I1108 09:17:10.040099       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1108 09:17:10.040111       1 shared_informer.go:349] "Waiting for caches to sync" controller="cidrallocator"
	I1108 09:17:10.040132       1 shared_informer.go:356] "Caches are synced" controller="cidrallocator"
	I1108 09:17:10.044235       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [aaae2304edaccb39caba0aedbe6bbbc27ae6f9630f3040981499827f3ad62365] <==
	I1108 09:17:07.988440       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:17:08.058340       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:17:08.158848       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:08.158895       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.94.2"]
	E1108 09:17:08.159017       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:08.181643       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:08.181700       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:08.188540       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:08.188951       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:08.189031       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:08.190792       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:08.190825       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:08.190869       1 config.go:200] "Starting service config controller"
	I1108 09:17:08.190883       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:08.190950       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:08.190962       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:08.190953       1 config.go:309] "Starting node config controller"
	I1108 09:17:08.190989       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:08.190999       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:08.291179       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:17:08.291187       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:08.291189       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
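	kube-proxy came up in iptables mode, and the nodePortAddresses warning is informational (NodePorts simply bind on all local IPs). Its logs are selected by the standard k8s-app=kube-proxy label:
	
	  kubectl --context no-preload-220714 -n kube-system logs -l k8s-app=kube-proxy --tail=5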
	
	
	==> kube-scheduler [fa188b4b7f4f29c847e9cf3900671c80c7e9ffcd91d763bb30db0af0b6fd9ba0] <==
	I1108 09:17:04.811874       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:17:06.630266       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:17:06.630347       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:17:06.630361       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:17:06.630390       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:17:06.696521       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:17:06.696570       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:06.701217       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.701325       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.703597       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:17:06.703642       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:17:06.802600       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830439     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mv89j\" (UniqueName: \"kubernetes.io/projected/4fa492e2-9880-45c3-ae24-c29ac5327451-kube-api-access-mv89j\") pod \"dashboard-metrics-scraper-6ffb444bf9-vff45\" (UID: \"4fa492e2-9880-45c3-ae24-c29ac5327451\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45"
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830513     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/1e9ce6e5-b160-47a2-a07c-4419790dd9e6-tmp-volume\") pod \"kubernetes-dashboard-855c9754f9-v2xrs\" (UID: \"1e9ce6e5-b160-47a2-a07c-4419790dd9e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2xrs"
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830542     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/4fa492e2-9880-45c3-ae24-c29ac5327451-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-vff45\" (UID: \"4fa492e2-9880-45c3-ae24-c29ac5327451\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45"
	Nov 08 09:17:10 no-preload-220714 kubelet[710]: I1108 09:17:10.830567     710 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgbnf\" (UniqueName: \"kubernetes.io/projected/1e9ce6e5-b160-47a2-a07c-4419790dd9e6-kube-api-access-pgbnf\") pod \"kubernetes-dashboard-855c9754f9-v2xrs\" (UID: \"1e9ce6e5-b160-47a2-a07c-4419790dd9e6\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2xrs"
	Nov 08 09:17:11 no-preload-220714 kubelet[710]: I1108 09:17:11.961650     710 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:17:14 no-preload-220714 kubelet[710]: I1108 09:17:14.724250     710 scope.go:117] "RemoveContainer" containerID="eec36c0e743c48957b08ca20e4d1277ed20b4c6be681292db42e19639dcc7aa5"
	Nov 08 09:17:15 no-preload-220714 kubelet[710]: I1108 09:17:15.730544     710 scope.go:117] "RemoveContainer" containerID="eec36c0e743c48957b08ca20e4d1277ed20b4c6be681292db42e19639dcc7aa5"
	Nov 08 09:17:15 no-preload-220714 kubelet[710]: I1108 09:17:15.730785     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:15 no-preload-220714 kubelet[710]: E1108 09:17:15.730924     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:16 no-preload-220714 kubelet[710]: I1108 09:17:16.735090     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:16 no-preload-220714 kubelet[710]: E1108 09:17:16.735334     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:18 no-preload-220714 kubelet[710]: I1108 09:17:18.753502     710 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-v2xrs" podStartSLOduration=1.9983163560000001 podStartE2EDuration="8.753478875s" podCreationTimestamp="2025-11-08 09:17:10 +0000 UTC" firstStartedPulling="2025-11-08 09:17:11.000112601 +0000 UTC m=+7.471285773" lastFinishedPulling="2025-11-08 09:17:17.755275149 +0000 UTC m=+14.226448292" observedRunningTime="2025-11-08 09:17:18.753429745 +0000 UTC m=+15.224602909" watchObservedRunningTime="2025-11-08 09:17:18.753478875 +0000 UTC m=+15.224652039"
	Nov 08 09:17:24 no-preload-220714 kubelet[710]: I1108 09:17:24.637026     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:24 no-preload-220714 kubelet[710]: E1108 09:17:24.637241     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: I1108 09:17:36.650226     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: I1108 09:17:36.789239     710 scope.go:117] "RemoveContainer" containerID="92790c24fb280cb936f5506ed95b8aaaf984da0803700369bfa8fb1f34193921"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: I1108 09:17:36.789550     710 scope.go:117] "RemoveContainer" containerID="ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	Nov 08 09:17:36 no-preload-220714 kubelet[710]: E1108 09:17:36.789755     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:38 no-preload-220714 kubelet[710]: I1108 09:17:38.797801     710 scope.go:117] "RemoveContainer" containerID="d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c"
	Nov 08 09:17:44 no-preload-220714 kubelet[710]: I1108 09:17:44.636356     710 scope.go:117] "RemoveContainer" containerID="ecfda84502ad343ffd004e743b562d682933921091b9cd8fb5837061719a6eca"
	Nov 08 09:17:44 no-preload-220714 kubelet[710]: E1108 09:17:44.636574     710 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-vff45_kubernetes-dashboard(4fa492e2-9880-45c3-ae24-c29ac5327451)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-vff45" podUID="4fa492e2-9880-45c3-ae24-c29ac5327451"
	Nov 08 09:17:56 no-preload-220714 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:17:56 no-preload-220714 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:17:56 no-preload-220714 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:17:56 no-preload-220714 systemd[1]: kubelet.service: Consumed 1.730s CPU time.
	
	
	==> kubernetes-dashboard [06b628afd654f553c9e29b039e937c82ba40dcc65ce40079884e2c2dd706cfbf] <==
	2025/11/08 09:17:17 Starting overwatch
	2025/11/08 09:17:17 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:17 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:17 Using secret token for csrf signing
	2025/11/08 09:17:17 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:17 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:17 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:17:17 Generating JWE encryption key
	2025/11/08 09:17:17 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:17 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:17 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:17 Creating in-cluster Sidecar client
	2025/11/08 09:17:17 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:17 Serving insecurely on HTTP port: 9090
	2025/11/08 09:17:47 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [d3e1079b945e3b3d2e2d3501999318f43f76bdabe58b616e7d3cfbb3b084df5c] <==
	I1108 09:17:07.956784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:17:37.959741       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [f452fc14bc8c79ed9ad72273f1284efd6351e9f0a98226756967b8f159d46390] <==
	I1108 09:17:38.864991       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:17:38.875134       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:17:38.875211       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:17:38.878057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:42.333103       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:46.593127       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:50.191119       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:53.245623       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:56.268387       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:56.273471       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:56.273658       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:17:56.273843       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-220714_440ad86f-06eb-473d-9c97-7df2a0acf36e!
	I1108 09:17:56.273838       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"43e594f5-edfa-4361-8eb4-8fe5628502f4", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-220714_440ad86f-06eb-473d-9c97-7df2a0acf36e became leader
	W1108 09:17:56.276025       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:56.280162       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:56.374025       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-220714_440ad86f-06eb-473d-9c97-7df2a0acf36e!
	W1108 09:17:58.283671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:58.289913       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:00.293671       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:00.298057       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
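
The kubelet excerpts above show the standard crash-loop restart policy at work: dashboard-metrics-scraper keeps failing, and the "back-off" in the CrashLoopBackOff message doubles from 10s (09:17:15) to 20s (09:17:36) between restart attempts. As a rough illustration of that schedule (a sketch, not kubelet source; the 5m cap is an assumption based on kubelet's documented maximum container backoff):

	package main

	import (
		"fmt"
		"time"
	)

	// crashLoopDelays returns the restart delays a kubelet-style backoff
	// would apply after consecutive container crashes: start at 10s,
	// double after each crash, and never exceed maxDelay.
	func crashLoopDelays(restarts int) []time.Duration {
		const initial = 10 * time.Second
		const maxDelay = 5 * time.Minute // assumed cap
		delays := make([]time.Duration, 0, restarts)
		d := initial
		for i := 0; i < restarts; i++ {
			delays = append(delays, d)
			d *= 2
			if d > maxDelay {
				d = maxDelay
			}
		}
		return delays
	}

	func main() {
		fmt.Println(crashLoopDelays(6)) // [10s 20s 40s 1m20s 2m40s 5m0s]
	}

The first storage-provisioner crash is related but distinct: its fatal "error getting server version" is a discovery probe against the in-cluster apiserver address (10.96.0.1:443) timing out while the node was restarting. Roughly equivalent client-go code, offered as a hypothetical sketch rather than the provisioner's actual source:

	package main

	import (
		"log"
		"time"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/rest"
	)

	func main() {
		cfg, err := rest.InClusterConfig() // pod service-account credentials
		if err != nil {
			log.Fatalf("in-cluster config: %v", err)
		}
		cfg.Timeout = 32 * time.Second // mirrors the 32s timeout in the failed GET

		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			log.Fatalf("clientset: %v", err)
		}
		v, err := cs.Discovery().ServerVersion() // GET /version on the apiserver
		if err != nil {
			// An unreachable apiserver surfaces here as "dial tcp ... i/o timeout",
			// matching the fatal line in the first provisioner log above.
			log.Fatalf("error getting server version: %v", err)
		}
		log.Printf("apiserver version: %s", v.GitVersion)
	}

The replacement provisioner (f452fc14…) came up once the apiserver was reachable again and went on to acquire the kube-system/k8s.io-minikube-hostpath lease.
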
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220714 -n no-preload-220714
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-220714 -n no-preload-220714: exit status 2 (363.725611ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context no-preload-220714 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/no-preload/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/no-preload/serial/Pause (5.91s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Pause (8.52s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-271910 --alsologtostderr -v=1
E1108 09:17:56.952560    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p embed-certs-271910 --alsologtostderr -v=1: exit status 80 (2.49271835s)

                                                
                                                
-- stdout --
	* Pausing node embed-certs-271910 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:17:56.587666  324200 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:56.587962  324200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:56.587974  324200 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:56.587978  324200 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:56.588172  324200 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:56.588433  324200 out.go:368] Setting JSON to false
	I1108 09:17:56.588479  324200 mustload.go:66] Loading cluster: embed-certs-271910
	I1108 09:17:56.588821  324200 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:56.589240  324200 cli_runner.go:164] Run: docker container inspect embed-certs-271910 --format={{.State.Status}}
	I1108 09:17:56.607788  324200 host.go:66] Checking if "embed-certs-271910" exists ...
	I1108 09:17:56.608123  324200 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:56.668082  324200 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:77 OomKillDisable:false NGoroutines:85 SystemTime:2025-11-08 09:17:56.656420845 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:56.668769  324200 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:embed-certs-271910 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:17:56.670794  324200 out.go:179] * Pausing node embed-certs-271910 ... 
	I1108 09:17:56.671929  324200 host.go:66] Checking if "embed-certs-271910" exists ...
	I1108 09:17:56.672193  324200 ssh_runner.go:195] Run: systemctl --version
	I1108 09:17:56.672233  324200 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" embed-certs-271910
	I1108 09:17:56.690829  324200 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33114 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/embed-certs-271910/id_rsa Username:docker}
	I1108 09:17:56.787581  324200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:56.812994  324200 pause.go:52] kubelet running: true
	I1108 09:17:56.813056  324200 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:56.980918  324200 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:56.980997  324200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:57.057601  324200 cri.go:89] found id: "ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d"
	I1108 09:17:57.057641  324200 cri.go:89] found id: "0597e3b576f435691740f61be89086552e310efa0315ec99646bfc30810071bf"
	I1108 09:17:57.057648  324200 cri.go:89] found id: "b9c6ba8e5353efb41278987aa4a581d742ba1a712f87a0d09f312cbf79324e9e"
	I1108 09:17:57.057653  324200 cri.go:89] found id: "c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704"
	I1108 09:17:57.057657  324200 cri.go:89] found id: "e37558e304fb251666501b3637ba5549bfeccc93f01f6e1c91e358882125958b"
	I1108 09:17:57.057670  324200 cri.go:89] found id: "8d8a79e509dd4ac3a34fd3cce48948ec1b9b67925d91b0ee3bddd3b4b0e06eb0"
	I1108 09:17:57.057679  324200 cri.go:89] found id: "5352f39b8b0747bd132936689a6fa5d2a11d72a6afa0c8818f848dde4c1d4518"
	I1108 09:17:57.057684  324200 cri.go:89] found id: "28d99c06b77fd13cee308f9a7f12ec7206f945a0776417e2c8d1311a8243960a"
	I1108 09:17:57.057691  324200 cri.go:89] found id: "4f37080f84679928c7dc97f8694d0e579a6d7c07580dea2acc938012181f50eb"
	I1108 09:17:57.057700  324200 cri.go:89] found id: "5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a"
	I1108 09:17:57.057706  324200 cri.go:89] found id: "ab178b9598b87b0a383b4725b6e758db53de46c4d43fe98360a285a76cf0bcc2"
	I1108 09:17:57.057708  324200 cri.go:89] found id: ""
	I1108 09:17:57.057743  324200 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:57.071641  324200 retry.go:31] will retry after 140.658515ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:57Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:57.213073  324200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:57.228551  324200 pause.go:52] kubelet running: false
	I1108 09:17:57.228611  324200 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:57.403133  324200 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:57.403213  324200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:57.483441  324200 cri.go:89] found id: "ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d"
	I1108 09:17:57.483470  324200 cri.go:89] found id: "0597e3b576f435691740f61be89086552e310efa0315ec99646bfc30810071bf"
	I1108 09:17:57.483476  324200 cri.go:89] found id: "b9c6ba8e5353efb41278987aa4a581d742ba1a712f87a0d09f312cbf79324e9e"
	I1108 09:17:57.483480  324200 cri.go:89] found id: "c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704"
	I1108 09:17:57.483484  324200 cri.go:89] found id: "e37558e304fb251666501b3637ba5549bfeccc93f01f6e1c91e358882125958b"
	I1108 09:17:57.483488  324200 cri.go:89] found id: "8d8a79e509dd4ac3a34fd3cce48948ec1b9b67925d91b0ee3bddd3b4b0e06eb0"
	I1108 09:17:57.483492  324200 cri.go:89] found id: "5352f39b8b0747bd132936689a6fa5d2a11d72a6afa0c8818f848dde4c1d4518"
	I1108 09:17:57.483497  324200 cri.go:89] found id: "28d99c06b77fd13cee308f9a7f12ec7206f945a0776417e2c8d1311a8243960a"
	I1108 09:17:57.483501  324200 cri.go:89] found id: "4f37080f84679928c7dc97f8694d0e579a6d7c07580dea2acc938012181f50eb"
	I1108 09:17:57.483522  324200 cri.go:89] found id: "5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a"
	I1108 09:17:57.483532  324200 cri.go:89] found id: "ab178b9598b87b0a383b4725b6e758db53de46c4d43fe98360a285a76cf0bcc2"
	I1108 09:17:57.483535  324200 cri.go:89] found id: ""
	I1108 09:17:57.483592  324200 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:57.495117  324200 retry.go:31] will retry after 403.441224ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:57Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:57.899477  324200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:57.913695  324200 pause.go:52] kubelet running: false
	I1108 09:17:57.913773  324200 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:58.082032  324200 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:58.082129  324200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:58.160358  324200 cri.go:89] found id: "ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d"
	I1108 09:17:58.160378  324200 cri.go:89] found id: "0597e3b576f435691740f61be89086552e310efa0315ec99646bfc30810071bf"
	I1108 09:17:58.160384  324200 cri.go:89] found id: "b9c6ba8e5353efb41278987aa4a581d742ba1a712f87a0d09f312cbf79324e9e"
	I1108 09:17:58.160389  324200 cri.go:89] found id: "c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704"
	I1108 09:17:58.160394  324200 cri.go:89] found id: "e37558e304fb251666501b3637ba5549bfeccc93f01f6e1c91e358882125958b"
	I1108 09:17:58.160399  324200 cri.go:89] found id: "8d8a79e509dd4ac3a34fd3cce48948ec1b9b67925d91b0ee3bddd3b4b0e06eb0"
	I1108 09:17:58.160403  324200 cri.go:89] found id: "5352f39b8b0747bd132936689a6fa5d2a11d72a6afa0c8818f848dde4c1d4518"
	I1108 09:17:58.160407  324200 cri.go:89] found id: "28d99c06b77fd13cee308f9a7f12ec7206f945a0776417e2c8d1311a8243960a"
	I1108 09:17:58.160411  324200 cri.go:89] found id: "4f37080f84679928c7dc97f8694d0e579a6d7c07580dea2acc938012181f50eb"
	I1108 09:17:58.160419  324200 cri.go:89] found id: "5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a"
	I1108 09:17:58.160423  324200 cri.go:89] found id: "ab178b9598b87b0a383b4725b6e758db53de46c4d43fe98360a285a76cf0bcc2"
	I1108 09:17:58.160427  324200 cri.go:89] found id: ""
	I1108 09:17:58.160466  324200 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:58.173537  324200 retry.go:31] will retry after 534.270279ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:58Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:17:58.708819  324200 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:17:58.723770  324200 pause.go:52] kubelet running: false
	I1108 09:17:58.723828  324200 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:17:58.900660  324200 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:17:58.900769  324200 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:17:58.985612  324200 cri.go:89] found id: "ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d"
	I1108 09:17:58.985632  324200 cri.go:89] found id: "0597e3b576f435691740f61be89086552e310efa0315ec99646bfc30810071bf"
	I1108 09:17:58.985636  324200 cri.go:89] found id: "b9c6ba8e5353efb41278987aa4a581d742ba1a712f87a0d09f312cbf79324e9e"
	I1108 09:17:58.985639  324200 cri.go:89] found id: "c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704"
	I1108 09:17:58.985641  324200 cri.go:89] found id: "e37558e304fb251666501b3637ba5549bfeccc93f01f6e1c91e358882125958b"
	I1108 09:17:58.985644  324200 cri.go:89] found id: "8d8a79e509dd4ac3a34fd3cce48948ec1b9b67925d91b0ee3bddd3b4b0e06eb0"
	I1108 09:17:58.985647  324200 cri.go:89] found id: "5352f39b8b0747bd132936689a6fa5d2a11d72a6afa0c8818f848dde4c1d4518"
	I1108 09:17:58.985649  324200 cri.go:89] found id: "28d99c06b77fd13cee308f9a7f12ec7206f945a0776417e2c8d1311a8243960a"
	I1108 09:17:58.985652  324200 cri.go:89] found id: "4f37080f84679928c7dc97f8694d0e579a6d7c07580dea2acc938012181f50eb"
	I1108 09:17:58.985657  324200 cri.go:89] found id: "5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a"
	I1108 09:17:58.985660  324200 cri.go:89] found id: "ab178b9598b87b0a383b4725b6e758db53de46c4d43fe98360a285a76cf0bcc2"
	I1108 09:17:58.985662  324200 cri.go:89] found id: ""
	I1108 09:17:58.985706  324200 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:17:59.005711  324200 out.go:203] 
	W1108 09:17:59.007232  324200 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:58Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:17:58Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:17:59.007254  324200 out.go:285] * 
	* 
	W1108 09:17:59.014113  324200 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:17:59.017510  324200 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p embed-certs-271910 --alsologtostderr -v=1 failed: exit status 80
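
The trace above pins down the failure mechanism: pause disables the kubelet, enumerates CRI containers with crictl (the same eleven IDs on every attempt), then tries to list them with "sudo runc list -f json", which fails each time because /run/runc does not exist on the node. After three jittered retries (140ms, 403ms, 534ms) minikube gives up and exits with GUEST_PAUSE. A minimal sketch of that retry loop, assuming illustrative backoff constants rather than minikube's exact retry.go parameters:

	package main

	import (
		"fmt"
		"math/rand"
		"os/exec"
		"time"
	)

	// listRunningContainers mimics the loop in the trace: run
	// "sudo runc list -f json", wait a jittered, growing delay after each
	// failure, and give up after a fixed number of attempts.
	func listRunningContainers(attempts int) ([]byte, error) {
		var lastErr error
		base := 100 * time.Millisecond
		for i := 0; i < attempts; i++ {
			out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
			if err == nil {
				return out, nil
			}
			lastErr = err // here: "open /run/runc: no such file or directory"
			wait := base + time.Duration(rand.Int63n(int64(base))) // add jitter
			fmt.Printf("will retry after %v: %v\n", wait, err)
			time.Sleep(wait)
			base *= 2
		}
		return nil, fmt.Errorf("list running: runc: %w", lastErr)
	}

	func main() {
		if _, err := listRunningContainers(4); err != nil {
			fmt.Println("X Exiting due to GUEST_PAUSE:", err)
		}
	}

Since crictl sees the containers but the runc lookup fails, the containers themselves are healthy; the error suggests the pause code is querying runc's default state directory, which this crio configuration does not populate.
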
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-271910
helpers_test.go:243: (dbg) docker inspect embed-certs-271910:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb",
	        "Created": "2025-11-08T09:15:51.304431445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312634,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:56.378966527Z",
	            "FinishedAt": "2025-11-08T09:16:55.321378639Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/hosts",
	        "LogPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb-json.log",
	        "Name": "/embed-certs-271910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-271910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-271910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb",
	                "LowerDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-271910",
	                "Source": "/var/lib/docker/volumes/embed-certs-271910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-271910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-271910",
	                "name.minikube.sigs.k8s.io": "embed-certs-271910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5774cc80bd7383b08db5a44820c7328e57bfc4fa4a620bb2348fc425c35505a9",
	            "SandboxKey": "/var/run/docker/netns/5774cc80bd73",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-271910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:30:44:8a:97:f8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea0d0f62e0b24d7b6e90e97450bb9bf7e3ead1e018cb014ae7285578554a529e",
	                    "EndpointID": "85ecf9d6b63cca6a725ab74e632407d21ad313b34474da310dade0dd8f06fe86",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-271910",
	                        "1bcde2187397"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
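
For reference, the inspect dump is also where the pause command got its SSH endpoint: the cli_runner step earlier ran docker container inspect with the template {{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}, which digs the published host port (33114 above) out of NetworkSettings.Ports. The same lookup from Go, as a small sketch (container name taken from this test; error handling simplified):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort asks the Docker CLI for the host port published for the
	// container's 22/tcp endpoint, using the same Go template seen in the
	// cli_runner trace above.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", container, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("embed-certs-271910")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh host port:", port) // prints 33114 for the dump above
	}
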
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910: exit status 2 (386.003751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-271910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-271910 logs -n 25: (1.275774873s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable metrics-server -p old-k8s-version-339286 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                  │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:58.478924  325211 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:58.479071  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479083  325211 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:58.479096  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479366  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:58.479861  325211 out.go:368] Setting JSON to false
	I1108 09:17:58.481212  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3629,"bootTime":1762589849,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:58.481320  325211 start.go:143] virtualization: kvm guest
	I1108 09:17:58.483829  325211 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:58.485799  325211 notify.go:221] Checking for updates...
	I1108 09:17:58.485811  325211 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:58.487583  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:58.489038  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:58.490367  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:58.491457  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:58.492651  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:58.494295  325211 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494419  325211 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494527  325211 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494637  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:58.521877  325211 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:58.522010  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.588747  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.576854709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.588862  325211 docker.go:319] overlay module found
	I1108 09:17:58.590962  325211 out.go:179] * Using the docker driver based on user configuration
	I1108 09:17:58.592340  325211 start.go:309] selected driver: docker
	I1108 09:17:58.592358  325211 start.go:930] validating driver "docker" against <nil>
	I1108 09:17:58.592371  325211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:58.593036  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.659441  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.646701871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.659624  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:17:58.659658  325211 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:17:58.659915  325211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:17:58.662513  325211 out.go:179] * Using Docker driver with root privileges
	I1108 09:17:58.663816  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:17:58.663873  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:58.663883  325211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:17:58.663955  325211 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:58.665267  325211 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:17:58.666553  325211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:58.667895  325211 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:58.669060  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:58.669119  325211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:58.669133  325211 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:58.669179  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:58.669265  325211 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:58.669277  325211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:58.669428  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:17:58.669460  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json: {Name:mk81817e2e19a8fdfa1ca2cba702e48d1cb06c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:58.692744  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:58.692762  325211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:58.692786  325211 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:58.692814  325211 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:58.692902  325211 start.go:364] duration metric: took 71.682µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:17:58.692929  325211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:58.693004  325211 start.go:125] createHost starting for "" (driver="docker")
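
Editor's note: the two lock entries above (lock.go:35 and start.go:360) both log a spec of the form {Name Clock Delay:500ms Timeout Cancel}, i.e. the caller polls for the lock every Delay until Timeout expires. A minimal sketch of those semantics only, using a hypothetical lock-file helper (this is not minikube's actual implementation):

	// lockspec_sketch.go: poll for an exclusive lock file every Delay until
	// Timeout, mirroring the {Delay:500ms Timeout:10m0s} spec logged above.
	// Hypothetical helper, for illustration only.
	package main

	import (
		"errors"
		"fmt"
		"os"
		"time"
	)

	// acquire retries every delay until timeout; returns a release func on success.
	func acquire(path string, delay, timeout time.Duration) (func(), error) {
		deadline := time.Now().Add(timeout)
		for {
			// O_CREATE|O_EXCL fails if another holder already created the file.
			f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
			if err == nil {
				f.Close()
				return func() { os.Remove(path) }, nil
			}
			if time.Now().After(deadline) {
				return nil, errors.New("timed out acquiring " + path)
			}
			time.Sleep(delay) // the Delay:500ms field in the logged spec
		}
	}

	func main() {
		release, err := acquire("/tmp/machines-newest-cni-620528.lock", 500*time.Millisecond, 10*time.Minute)
		if err != nil {
			fmt.Println(err)
			return
		}
		defer release()
		fmt.Println("lock held; provisioning would proceed here")
	}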
	
	
	==> CRI-O <==
	Nov 08 09:17:28 embed-certs-271910 crio[559]: time="2025-11-08T09:17:28.853985007Z" level=info msg="Started container" PID=1742 containerID=ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper id=81957730-090e-40bf-9965-60a49bed5a4d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c55bc29dede8824efeff6cfd8cc47bc255887e1d4b52141f730e95944223e552
	Nov 08 09:17:29 embed-certs-271910 crio[559]: time="2025-11-08T09:17:29.326973712Z" level=info msg="Removing container: 7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88" id=8114ac21-8e0a-4ad8-9b8e-db88fa1790a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:29 embed-certs-271910 crio[559]: time="2025-11-08T09:17:29.339804685Z" level=info msg="Removed container 7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=8114ac21-8e0a-4ad8-9b8e-db88fa1790a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.355651847Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a91e3a12-8ddd-4cd3-958f-58348ce3e66a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.356688861Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c23f74c8-e17d-4699-b982-b169ec96fc28 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.357927237Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2332e51f-f39f-4359-ac9e-8662b345f605 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.358107062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.3628475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.3630516Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b884fff967f45c17a080b05ad5e6259d04a371ac09ba4f081d4cd8d1f1514b80/merged/etc/passwd: no such file or directory"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.363088277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b884fff967f45c17a080b05ad5e6259d04a371ac09ba4f081d4cd8d1f1514b80/merged/etc/group: no such file or directory"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.363401546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.399931332Z" level=info msg="Created container ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d: kube-system/storage-provisioner/storage-provisioner" id=2332e51f-f39f-4359-ac9e-8662b345f605 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.400700634Z" level=info msg="Starting container: ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d" id=ab7d8d12-ca9b-490d-baf6-8701746b03ef name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.402842984Z" level=info msg="Started container" PID=1756 containerID=ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d description=kube-system/storage-provisioner/storage-provisioner id=ab7d8d12-ca9b-490d-baf6-8701746b03ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad6293468ddecd6811b8247212e43506c0bd03a87e6ee598942b3534f0d845a0
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.204810652Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ac4fb889-722a-448c-a651-bbd4b80cc98a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.20607158Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=35ee5da5-ec13-46fe-b383-18bfff9fd632 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.207353947Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=4fd3dd73-297f-44e5-866d-c51d288911a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.207489392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.214752461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.215419598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.243877146Z" level=info msg="Created container 5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=4fd3dd73-297f-44e5-866d-c51d288911a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.244652522Z" level=info msg="Starting container: 5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a" id=e6477389-5efc-4cbe-b001-2818b04be5d4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.246887849Z" level=info msg="Started container" PID=1790 containerID=5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper id=e6477389-5efc-4cbe-b001-2818b04be5d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c55bc29dede8824efeff6cfd8cc47bc255887e1d4b52141f730e95944223e552
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.399601562Z" level=info msg="Removing container: ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a" id=6f0ad807-c890-48cb-b390-0f4e9c9cff5f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.411064112Z" level=info msg="Removed container ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=6f0ad807-c890-48cb-b390-0f4e9c9cff5f name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5449d7527f410       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           6 seconds ago       Exited              dashboard-metrics-scraper   3                   c55bc29dede88       dashboard-metrics-scraper-6ffb444bf9-n8dq9   kubernetes-dashboard
	ae778bb315748       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           21 seconds ago      Running             storage-provisioner         1                   ad6293468ddec       storage-provisioner                          kube-system
	ab178b9598b87       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   45 seconds ago      Running             kubernetes-dashboard        0                   05d82794bb9d3       kubernetes-dashboard-855c9754f9-7gzf8        kubernetes-dashboard
	e0919752d4be3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           52 seconds ago      Running             busybox                     1                   effc33f82d446       busybox                                      default
	0597e3b576f43       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           52 seconds ago      Running             coredns                     0                   a0738b3265f75       coredns-66bc5c9577-cbw4j                     kube-system
	b9c6ba8e5353e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           52 seconds ago      Running             kube-proxy                  0                   5e46589963462       kube-proxy-lwbl6                             kube-system
	c74d93e81aff5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           52 seconds ago      Exited              storage-provisioner         0                   ad6293468ddec       storage-provisioner                          kube-system
	e37558e304fb2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           52 seconds ago      Running             kindnet-cni                 0                   ba80dd78a1e13       kindnet-49l78                                kube-system
	8d8a79e509dd4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           56 seconds ago      Running             kube-scheduler              0                   d5edadc8f4136       kube-scheduler-embed-certs-271910            kube-system
	5352f39b8b074       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           56 seconds ago      Running             etcd                        0                   2f92f505aff4e       etcd-embed-certs-271910                      kube-system
	28d99c06b77fd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           56 seconds ago      Running             kube-controller-manager     0                   8b506d5399f74       kube-controller-manager-embed-certs-271910   kube-system
	4f37080f84679       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           56 seconds ago      Running             kube-apiserver              0                   ebe3fe55f2dbf       kube-apiserver-embed-certs-271910            kube-system
	
	
	==> coredns [0597e3b576f435691740f61be89086552e310efa0315ec99646bfc30810071bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49036 - 61782 "HINFO IN 8992124418978496161.4245422833279390252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051447768s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
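
Editor's note: the dial errors above are the CoreDNS kubernetes plugin timing out against the cluster Service VIP (10.96.0.1:443, the first address of the logged 10.96.0.0/12 ServiceCIDR) while kube-proxy was still coming up; the plugin starts with an unsynced API and keeps retrying, which is why warnings rather than a crash follow. A sketch of the same reachability check, runnable from inside a pod:

	// vipcheck.go: reproduce the "dial tcp 10.96.0.1:443: i/o timeout"
	// connectivity check from the CoreDNS log above (illustrative only).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("apiserver VIP unreachable:", err) // matches the CoreDNS symptom
			return
		}
		conn.Close()
		fmt.Println("apiserver VIP reachable; the kubernetes plugin can sync")
	}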
	
	
	==> describe nodes <==
	Name:               embed-certs-271910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-271910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=embed-certs-271910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-271910
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:17:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-271910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                5a4dbec0-6466-4d25-92b6-8bbd4bdc538c
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                 coredns-66bc5c9577-cbw4j                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     106s
	  kube-system                 etcd-embed-certs-271910                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         111s
	  kube-system                 kindnet-49l78                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      106s
	  kube-system                 kube-apiserver-embed-certs-271910             250m (3%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-controller-manager-embed-certs-271910    200m (2%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-proxy-lwbl6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kube-system                 kube-scheduler-embed-certs-271910             100m (1%)     0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         106s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n8dq9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7gzf8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         50s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 104s                 kube-proxy       
	  Normal  Starting                 52s                  kube-proxy       
	  Normal  Starting                 118s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  117s (x8 over 118s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    117s (x8 over 118s)  kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     117s (x8 over 118s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s                 kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  111s                 kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s                 kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           107s                 node-controller  Node embed-certs-271910 event: Registered Node embed-certs-271910 in Controller
	  Normal  NodeReady                95s                  kubelet          Node embed-certs-271910 status is now: NodeReady
	  Normal  Starting                 57s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  57s (x8 over 57s)    kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    57s (x8 over 57s)    kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     57s (x8 over 57s)    kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           51s                  node-controller  Node embed-certs-271910 event: Registered Node embed-certs-271910 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [5352f39b8b0747bd132936689a6fa5d2a11d72a6afa0c8818f848dde4c1d4518] <==
	{"level":"warn","ts":"2025-11-08T09:17:05.843117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.855848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.867567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.875043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.882242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.891031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.898549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.909080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.918481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.927472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.935069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.942689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.949630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.957595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.967377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.974976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.981774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.988616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.002594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.003910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.012379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.024238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.030969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.038224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.104055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54238","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:00 up  1:00,  0 user,  load average: 4.40, 4.01, 2.61
	Linux embed-certs-271910 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e37558e304fb251666501b3637ba5549bfeccc93f01f6e1c91e358882125958b] <==
	I1108 09:17:07.743158       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:17:07.822021       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 09:17:07.822223       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:17:07.822248       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:17:07.822276       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:17:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:17:08.026104       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:17:08.026155       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:17:08.026171       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:17:08.026379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:17:08.327063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:17:08.327094       1 metrics.go:72] Registering metrics
	I1108 09:17:08.327161       1 controller.go:711] "Syncing nftables rules"
	I1108 09:17:18.026128       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:18.026199       1 main.go:301] handling current node
	I1108 09:17:28.026395       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:28.026432       1 main.go:301] handling current node
	I1108 09:17:38.026565       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:38.026600       1 main.go:301] handling current node
	I1108 09:17:48.026349       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:48.026398       1 main.go:301] handling current node
	I1108 09:17:58.029536       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:58.029587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f37080f84679928c7dc97f8694d0e579a6d7c07580dea2acc938012181f50eb] <==
	I1108 09:17:06.636500       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:17:06.636606       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:17:06.637053       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:17:06.637103       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:17:06.638680       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 09:17:06.638731       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:17:06.638756       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:17:06.638765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:17:06.638773       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:17:06.643630       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:17:06.643800       1 policy_source.go:240] refreshing policies
	I1108 09:17:06.655594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:17:06.661895       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:17:07.081363       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:17:07.117219       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:07.153397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:07.171742       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:07.188577       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:17:07.262148       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.131.182"}
	I1108 09:17:07.289979       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.179.54"}
	I1108 09:17:07.539479       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:10.025331       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:10.426857       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:10.426933       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:10.527304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [28d99c06b77fd13cee308f9a7f12ec7206f945a0776417e2c8d1311a8243960a] <==
	I1108 09:17:09.970795       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:17:09.972130       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:17:09.972170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:17:09.972183       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:17:09.972257       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:17:09.972273       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:17:09.972299       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:17:09.972317       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:17:09.972318       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:17:09.973434       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:17:09.977024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:09.977538       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:17:09.978324       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:09.979382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:09.979393       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:17:09.980534       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:09.980634       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:09.983876       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:17:09.987224       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:17:09.989516       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:17:09.991094       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:17:09.999265       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:17:09.999292       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:17:09.999302       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:17:10.005713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b9c6ba8e5353efb41278987aa4a581d742ba1a712f87a0d09f312cbf79324e9e] <==
	I1108 09:17:07.635080       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:17:07.704832       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:17:07.805505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:07.805546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 09:17:07.805625       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:07.828552       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:07.828603       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:07.835090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:07.835579       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:07.835606       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:07.837185       1 config.go:200] "Starting service config controller"
	I1108 09:17:07.837205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:07.837375       1 config.go:309] "Starting node config controller"
	I1108 09:17:07.837393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:07.837401       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:07.837572       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:07.837588       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:07.837605       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:07.837610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:07.937489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:07.938197       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:17:07.938259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [8d8a79e509dd4ac3a34fd3cce48948ec1b9b67925d91b0ee3bddd3b4b0e06eb0] <==
	I1108 09:17:04.821412       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:17:06.569823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:17:06.569880       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:17:06.569893       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:17:06.569903       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:17:06.674526       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:17:06.674560       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:06.678895       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.678980       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.680025       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:17:06.680118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:17:06.779331       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:10 embed-certs-271910 kubelet[715]: I1108 09:17:10.684239     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/95e18aaa-eef7-4785-bafe-319d88d78fbe-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-n8dq9\" (UID: \"95e18aaa-eef7-4785-bafe-319d88d78fbe\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9"
	Nov 08 09:17:10 embed-certs-271910 kubelet[715]: I1108 09:17:10.684430     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8cfs\" (UniqueName: \"kubernetes.io/projected/95e18aaa-eef7-4785-bafe-319d88d78fbe-kube-api-access-p8cfs\") pod \"dashboard-metrics-scraper-6ffb444bf9-n8dq9\" (UID: \"95e18aaa-eef7-4785-bafe-319d88d78fbe\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9"
	Nov 08 09:17:13 embed-certs-271910 kubelet[715]: I1108 09:17:13.127221     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:17:18 embed-certs-271910 kubelet[715]: I1108 09:17:18.289915     715 scope.go:117] "RemoveContainer" containerID="ed1d2b6ec29d468e1afe6fb0b20c4b2fce1c3ada26a8d7e5e1b6adb39c40763f"
	Nov 08 09:17:18 embed-certs-271910 kubelet[715]: I1108 09:17:18.301662     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7gzf8" podStartSLOduration=4.261061702 podStartE2EDuration="8.301637384s" podCreationTimestamp="2025-11-08 09:17:10 +0000 UTC" firstStartedPulling="2025-11-08 09:17:10.942564619 +0000 UTC m=+7.853525394" lastFinishedPulling="2025-11-08 09:17:14.983140309 +0000 UTC m=+11.894101076" observedRunningTime="2025-11-08 09:17:15.301030872 +0000 UTC m=+12.211991656" watchObservedRunningTime="2025-11-08 09:17:18.301637384 +0000 UTC m=+15.212598168"
	Nov 08 09:17:19 embed-certs-271910 kubelet[715]: I1108 09:17:19.294406     715 scope.go:117] "RemoveContainer" containerID="ed1d2b6ec29d468e1afe6fb0b20c4b2fce1c3ada26a8d7e5e1b6adb39c40763f"
	Nov 08 09:17:19 embed-certs-271910 kubelet[715]: I1108 09:17:19.294514     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:19 embed-certs-271910 kubelet[715]: E1108 09:17:19.294710     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:20 embed-certs-271910 kubelet[715]: I1108 09:17:20.300580     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:20 embed-certs-271910 kubelet[715]: E1108 09:17:20.300767     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:28 embed-certs-271910 kubelet[715]: I1108 09:17:28.806420     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:29 embed-certs-271910 kubelet[715]: I1108 09:17:29.325658     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:29 embed-certs-271910 kubelet[715]: I1108 09:17:29.325909     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:29 embed-certs-271910 kubelet[715]: E1108 09:17:29.326160     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:38 embed-certs-271910 kubelet[715]: I1108 09:17:38.355201     715 scope.go:117] "RemoveContainer" containerID="c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704"
	Nov 08 09:17:38 embed-certs-271910 kubelet[715]: I1108 09:17:38.806191     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:38 embed-certs-271910 kubelet[715]: E1108 09:17:38.806447     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: I1108 09:17:53.204270     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: I1108 09:17:53.398082     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: I1108 09:17:53.398342     715 scope.go:117] "RemoveContainer" containerID="5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: E1108 09:17:53.398559     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
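The back-off values in the kubelet errors above walk the expected crash-loop schedule: 10s, then 20s, then 40s per restart of dashboard-metrics-scraper. A minimal Go sketch of that schedule, assuming kubelet's documented 10s base and 5m cap (the helper name is illustrative, not kubelet's):

package main

import (
	"fmt"
	"time"
)

// crashLoopDelay returns a kubelet-style restart delay for the nth
// consecutive crash: base doubled per restart, capped at max. The 10s
// base and 5m cap mirror kubelet's documented defaults; the function
// name is ours, not kubelet's.
func crashLoopDelay(restarts int, base, max time.Duration) time.Duration {
	d := base
	for i := 1; i < restarts; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 1; n <= 6; n++ {
		fmt.Printf("restart %d: back-off %s\n", n, crashLoopDelay(n, 10*time.Second, 5*time.Minute))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s — matching the log's progression.
}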
	
	
	==> kubernetes-dashboard [ab178b9598b87b0a383b4725b6e758db53de46c4d43fe98360a285a76cf0bcc2] <==
	2025/11/08 09:17:15 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:15 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:15 Using secret token for csrf signing
	2025/11/08 09:17:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:17:15 Generating JWE encryption key
	2025/11/08 09:17:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:15 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:15 Creating in-cluster Sidecar client
	2025/11/08 09:17:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:15 Serving insecurely on HTTP port: 9090
	2025/11/08 09:17:15 Starting overwatch
	2025/11/08 09:17:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d] <==
	I1108 09:17:38.415784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:17:38.425088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:17:38.425150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:17:38.427480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:41.882484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:46.142767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:49.740489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:52.794231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:55.816853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:55.822010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:55.822171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:17:55.822245       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e00d8485-fd2f-4aef-b7f8-239d96fe73e5", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-271910_02c0869f-bb07-47dd-a8e8-2e557886e3e0 became leader
	I1108 09:17:55.822365       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-271910_02c0869f-bb07-47dd-a8e8-2e557886e3e0!
	W1108 09:17:55.824140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:55.827936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:55.923375       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-271910_02c0869f-bb07-47dd-a8e8-2e557886e3e0!
	W1108 09:17:57.830965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:57.841037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:59.845334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:59.849894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
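The provisioner log above shows client-go leader election acquiring the kube-system/k8s.io-minikube-hostpath lock, while the repeated warnings flag that the v1 Endpoints object backing that lock is deprecated. A minimal client-go sketch of the same election switched to the non-deprecated Leases lock; lock name and namespace are taken from the log, the identity string and timings are illustrative:

package main

import (
	"context"
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	cfg, err := rest.InClusterConfig() // same in-cluster setup the provisioner uses
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Leases lock instead of the deprecated v1 Endpoints lock the warnings flag.
	lock, err := resourcelock.New(resourcelock.LeasesResourceLock,
		"kube-system", "k8s.io-minikube-hostpath",
		client.CoreV1(), client.CoordinationV1(),
		resourcelock.ResourceLockConfig{Identity: "embed-certs-271910_example"}) // illustrative identity
	if err != nil {
		log.Fatal(err)
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second, // illustrative timings
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				log.Println("became leader; start the provisioner controller here")
			},
			OnStoppedLeading: func() { log.Println("lost leadership") },
		},
	})
}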
	
	
	==> storage-provisioner [c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704] <==
	I1108 09:17:07.599352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:17:37.605760       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
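For context on the fatal storage-provisioner exit above (main.go:39, i/o timeout on https://10.96.0.1:443/version): the startup step is an in-cluster client-go version probe. A plausible reconstruction, not the provisioner's actual source; the 32s timeout matches the ?timeout=32s in the failing request:

package main

import (
	"log"
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // in-cluster config, as the provisioner uses
	if err != nil {
		log.Fatal(err)
	}
	cfg.Timeout = 32 * time.Second // matches the ?timeout=32s in the failing request
	client := kubernetes.NewForConfigOrDie(cfg)

	v, err := client.Discovery().ServerVersion()
	if err != nil {
		log.Fatalf("error getting server version: %v", err) // the F-level exit seen above
	}
	log.Printf("apiserver version: %s", v.GitVersion)
}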
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271910 -n embed-certs-271910
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271910 -n embed-certs-271910: exit status 2 (366.665078ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
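--format={{.APIServer}} is a Go text/template rendered against minikube's status struct, which is why the command can print a bare word like the Running above even while exiting non-zero. A self-contained sketch of the mechanism; the Status struct here is illustrative, not minikube's actual type:

package main

import (
	"os"
	"text/template"
)

// Status is an illustrative stand-in for minikube's status struct.
type Status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := Status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	// The --format argument is parsed as a text/template and executed
	// against the status value, independently of the exit code.
	tmpl := template.Must(template.New("status").Parse("{{.APIServer}}"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		panic(err)
	}
	// Prints: Paused
}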
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-271910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect embed-certs-271910
helpers_test.go:243: (dbg) docker inspect embed-certs-271910:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb",
	        "Created": "2025-11-08T09:15:51.304431445Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 312634,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:16:56.378966527Z",
	            "FinishedAt": "2025-11-08T09:16:55.321378639Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/hostname",
	        "HostsPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/hosts",
	        "LogPath": "/var/lib/docker/containers/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb/1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb-json.log",
	        "Name": "/embed-certs-271910",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "embed-certs-271910:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "embed-certs-271910",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1bcde2187397bcc81207ae9803eed5a174ad7f9ac455868226cc06ccd574eedb",
	                "LowerDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/merged",
	                "UpperDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/diff",
	                "WorkDir": "/var/lib/docker/overlay2/56c89b44b1b26a1f1f80fa3f83326c1f8da197f426ce64d03d546adbf5b4f03e/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "embed-certs-271910",
	                "Source": "/var/lib/docker/volumes/embed-certs-271910/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "embed-certs-271910",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "embed-certs-271910",
	                "name.minikube.sigs.k8s.io": "embed-certs-271910",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "5774cc80bd7383b08db5a44820c7328e57bfc4fa4a620bb2348fc425c35505a9",
	            "SandboxKey": "/var/run/docker/netns/5774cc80bd73",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33114"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33115"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33118"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33116"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33117"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "embed-certs-271910": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "16:30:44:8a:97:f8",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "ea0d0f62e0b24d7b6e90e97450bb9bf7e3ead1e018cb014ae7285578554a529e",
	                    "EndpointID": "85ecf9d6b63cca6a725ab74e632407d21ad313b34474da310dade0dd8f06fe86",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "embed-certs-271910",
	                        "1bcde2187397"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
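The inspect output above maps the container's 8443/tcp apiserver port to 127.0.0.1:33117 on the host. A short sketch of reading that mapping programmatically with the Docker Engine Go SDK (container name taken from the output; error handling kept minimal):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	info, err := cli.ContainerInspect(context.Background(), "embed-certs-271910")
	if err != nil {
		log.Fatal(err)
	}
	// NetworkSettings.Ports maps "8443/tcp" to its host bindings (33117 here).
	for _, b := range info.NetworkSettings.Ports["8443/tcp"] {
		fmt.Printf("%s:%s\n", b.HostIP, b.HostPort)
	}
}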
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910: exit status 2 (352.70674ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/embed-certs/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-271910 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-271910 logs -n 25: (3.06679117s)
helpers_test.go:260: TestStartStop/group/embed-certs/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ stop    │ -p old-k8s-version-339286 --alsologtostderr -v=3                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable metrics-server -p embed-certs-271910 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                      │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ addons  │ enable metrics-server -p no-preload-220714 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │                     │
	│ stop    │ -p embed-certs-271910 --alsologtostderr -v=3                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ stop    │ -p no-preload-220714 --alsologtostderr -v=3                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ addons  │ enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                             │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:58.478924  325211 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:58.479071  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479083  325211 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:58.479096  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479366  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:58.479861  325211 out.go:368] Setting JSON to false
	I1108 09:17:58.481212  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3629,"bootTime":1762589849,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:58.481320  325211 start.go:143] virtualization: kvm guest
	I1108 09:17:58.483829  325211 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:58.485799  325211 notify.go:221] Checking for updates...
	I1108 09:17:58.485811  325211 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:58.487583  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:58.489038  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:58.490367  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:58.491457  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:58.492651  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:58.494295  325211 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494419  325211 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494527  325211 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494637  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:58.521877  325211 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:58.522010  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.588747  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.576854709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.588862  325211 docker.go:319] overlay module found
	I1108 09:17:58.590962  325211 out.go:179] * Using the docker driver based on user configuration
	I1108 09:17:58.592340  325211 start.go:309] selected driver: docker
	I1108 09:17:58.592358  325211 start.go:930] validating driver "docker" against <nil>
	I1108 09:17:58.592371  325211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:58.593036  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.659441  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.646701871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.659624  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:17:58.659658  325211 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:17:58.659915  325211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:17:58.662513  325211 out.go:179] * Using Docker driver with root privileges
	I1108 09:17:58.663816  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:17:58.663873  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:58.663883  325211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:17:58.663955  325211 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:58.665267  325211 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:17:58.666553  325211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:58.667895  325211 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:58.669060  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:58.669119  325211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:58.669133  325211 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:58.669179  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:58.669265  325211 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:58.669277  325211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:58.669428  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:17:58.669460  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json: {Name:mk81817e2e19a8fdfa1ca2cba702e48d1cb06c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:58.692744  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:58.692762  325211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:58.692786  325211 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:58.692814  325211 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:58.692902  325211 start.go:364] duration metric: took 71.682µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:17:58.692929  325211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:58.693004  325211 start.go:125] createHost starting for "" (driver="docker")
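acquireMachinesLock and the config-write lock above both name a lock and retry with Delay:500ms until a Timeout (10m and 1m respectively). A rough sketch of that acquire-with-retry pattern using an exclusive lock file; this approximates the semantics rather than reproducing minikube's actual lock implementation:

package main

import (
	"fmt"
	"os"
	"time"
)

// acquireFileLock retries an O_EXCL create every delay until timeout,
// approximating the named-lock semantics in the log (Delay:500ms Timeout:10m).
// Names and the /tmp path are ours, for illustration only.
func acquireFileLock(path string, delay, timeout time.Duration) (*os.File, error) {
	deadline := time.Now().Add(timeout)
	for {
		f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			return f, nil // holder removes the file to release
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("timed out acquiring %s: %w", path, err)
		}
		time.Sleep(delay)
	}
}

func main() {
	f, err := acquireFileLock("/tmp/newest-cni-620528.lock", 500*time.Millisecond, 10*time.Minute)
	if err != nil {
		panic(err)
	}
	defer os.Remove(f.Name())
	defer f.Close()
	fmt.Println("lock held; safe to provision")
}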
	
	
	==> CRI-O <==
	Nov 08 09:17:28 embed-certs-271910 crio[559]: time="2025-11-08T09:17:28.853985007Z" level=info msg="Started container" PID=1742 containerID=ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper id=81957730-090e-40bf-9965-60a49bed5a4d name=/runtime.v1.RuntimeService/StartContainer sandboxID=c55bc29dede8824efeff6cfd8cc47bc255887e1d4b52141f730e95944223e552
	Nov 08 09:17:29 embed-certs-271910 crio[559]: time="2025-11-08T09:17:29.326973712Z" level=info msg="Removing container: 7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88" id=8114ac21-8e0a-4ad8-9b8e-db88fa1790a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:29 embed-certs-271910 crio[559]: time="2025-11-08T09:17:29.339804685Z" level=info msg="Removed container 7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=8114ac21-8e0a-4ad8-9b8e-db88fa1790a7 name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.355651847Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=a91e3a12-8ddd-4cd3-958f-58348ce3e66a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.356688861Z" level=info msg="Checking image status: gcr.io/k8s-minikube/storage-provisioner:v5" id=c23f74c8-e17d-4699-b982-b169ec96fc28 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.357927237Z" level=info msg="Creating container: kube-system/storage-provisioner/storage-provisioner" id=2332e51f-f39f-4359-ac9e-8662b345f605 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.358107062Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.3628475Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.3630516Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/b884fff967f45c17a080b05ad5e6259d04a371ac09ba4f081d4cd8d1f1514b80/merged/etc/passwd: no such file or directory"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.363088277Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/b884fff967f45c17a080b05ad5e6259d04a371ac09ba4f081d4cd8d1f1514b80/merged/etc/group: no such file or directory"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.363401546Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.399931332Z" level=info msg="Created container ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d: kube-system/storage-provisioner/storage-provisioner" id=2332e51f-f39f-4359-ac9e-8662b345f605 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.400700634Z" level=info msg="Starting container: ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d" id=ab7d8d12-ca9b-490d-baf6-8701746b03ef name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:38 embed-certs-271910 crio[559]: time="2025-11-08T09:17:38.402842984Z" level=info msg="Started container" PID=1756 containerID=ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d description=kube-system/storage-provisioner/storage-provisioner id=ab7d8d12-ca9b-490d-baf6-8701746b03ef name=/runtime.v1.RuntimeService/StartContainer sandboxID=ad6293468ddecd6811b8247212e43506c0bd03a87e6ee598942b3534f0d845a0
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.204810652Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=ac4fb889-722a-448c-a651-bbd4b80cc98a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.20607158Z" level=info msg="Checking image status: registry.k8s.io/echoserver:1.4" id=35ee5da5-ec13-46fe-b383-18bfff9fd632 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.207353947Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=4fd3dd73-297f-44e5-866d-c51d288911a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.207489392Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.214752461Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.215419598Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.243877146Z" level=info msg="Created container 5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=4fd3dd73-297f-44e5-866d-c51d288911a7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.244652522Z" level=info msg="Starting container: 5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a" id=e6477389-5efc-4cbe-b001-2818b04be5d4 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.246887849Z" level=info msg="Started container" PID=1790 containerID=5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a description=kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper id=e6477389-5efc-4cbe-b001-2818b04be5d4 name=/runtime.v1.RuntimeService/StartContainer sandboxID=c55bc29dede8824efeff6cfd8cc47bc255887e1d4b52141f730e95944223e552
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.399601562Z" level=info msg="Removing container: ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a" id=6f0ad807-c890-48cb-b390-0f4e9c9cff5f name=/runtime.v1.RuntimeService/RemoveContainer
	Nov 08 09:17:53 embed-certs-271910 crio[559]: time="2025-11-08T09:17:53.411064112Z" level=info msg="Removed container ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a: kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9/dashboard-metrics-scraper" id=6f0ad807-c890-48cb-b390-0f4e9c9cff5f name=/runtime.v1.RuntimeService/RemoveContainer
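Each name=/runtime.v1.RuntimeService/... entry above is CRI-O serving a CRI gRPC call from the kubelet. A hedged sketch of issuing one of those calls directly against CRI-O's default socket using the published CRI API (read-only ListContainers; requires access to the socket, and the socket path assumes CRI-O's default):

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's default socket; no TLS on a local unix socket.
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	resp, err := rt.ListContainers(context.Background(), &runtimeapi.ListContainersRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		// Metadata assumed non-nil for brevity.
		fmt.Printf("%s\t%v\t%s\n", c.Id, c.State, c.Metadata.Name)
	}
}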
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                          NAMESPACE
	5449d7527f410       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           9 seconds ago       Exited              dashboard-metrics-scraper   3                   c55bc29dede88       dashboard-metrics-scraper-6ffb444bf9-n8dq9   kubernetes-dashboard
	ae778bb315748       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   ad6293468ddec       storage-provisioner                          kube-system
	ab178b9598b87       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   47 seconds ago      Running             kubernetes-dashboard        0                   05d82794bb9d3       kubernetes-dashboard-855c9754f9-7gzf8        kubernetes-dashboard
	e0919752d4be3       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           54 seconds ago      Running             busybox                     1                   effc33f82d446       busybox                                      default
	0597e3b576f43       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   a0738b3265f75       coredns-66bc5c9577-cbw4j                     kube-system
	b9c6ba8e5353e       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   5e46589963462       kube-proxy-lwbl6                             kube-system
	c74d93e81aff5       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   ad6293468ddec       storage-provisioner                          kube-system
	e37558e304fb2       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   ba80dd78a1e13       kindnet-49l78                                kube-system
	8d8a79e509dd4       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   d5edadc8f4136       kube-scheduler-embed-certs-271910            kube-system
	5352f39b8b074       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   2f92f505aff4e       etcd-embed-certs-271910                      kube-system
	28d99c06b77fd       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   8b506d5399f74       kube-controller-manager-embed-certs-271910   kube-system
	4f37080f84679       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   ebe3fe55f2dbf       kube-apiserver-embed-certs-271910            kube-system
	
	
	==> coredns [0597e3b576f435691740f61be89086552e310efa0315ec99646bfc30810071bf] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = fa9a0cdcdddcb4be74a0eaf7cfcb211c40e29ddf5507e03bbfc0065bade31f0f2641a2513136e246f32328dd126fc93236fb5c595246f0763926a524386705e8
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:49036 - 61782 "HINFO IN 8992124418978496161.4245422833279390252. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.051447768s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
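A note on the errors above: the repeated `dial tcp 10.96.0.1:443: i/o timeout` entries mean CoreDNS could not reach the API server through the `kubernetes` Service ClusterIP for roughly thirty seconds after startup. A minimal way to distinguish an apiserver problem from a ClusterIP routing problem, assuming kubectl is pointed at this cluster (the check pod name and image are illustrative, not taken from this report):

	# Hit the apiserver via its external endpoint first
	kubectl --context embed-certs-271910 get --raw /readyz
	# Then exercise the in-cluster ClusterIP path that CoreDNS uses
	kubectl --context embed-certs-271910 run curl-check --rm -it --restart=Never \
	  --image=curlimages/curl -- curl -sk https://10.96.0.1:443/version

If the first call succeeds and the second times out, the fault is in service routing (kube-proxy or the CNI) rather than the apiserver itself.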
	
	
	==> describe nodes <==
	Name:               embed-certs-271910
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-271910
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=embed-certs-271910
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_10_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-271910
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:17:47 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:17:36 +0000   Sat, 08 Nov 2025 09:16:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.85.2
	  Hostname:    embed-certs-271910
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                5a4dbec0-6466-4d25-92b6-8bbd4bdc538c
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-cbw4j                      100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     109s
	  kube-system                 etcd-embed-certs-271910                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-49l78                                 100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      109s
	  kube-system                 kube-apiserver-embed-certs-271910             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-embed-certs-271910    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-lwbl6                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kube-scheduler-embed-certs-271910             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-n8dq9    0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-7gzf8         0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 107s               kube-proxy       
	  Normal  Starting                 55s                kube-proxy       
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m1s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m1s)  kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m1s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s               kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s               kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s               kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           110s               node-controller  Node embed-certs-271910 event: Registered Node embed-certs-271910 in Controller
	  Normal  NodeReady                98s                kubelet          Node embed-certs-271910 status is now: NodeReady
	  Normal  Starting                 60s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  60s (x8 over 60s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    60s (x8 over 60s)  kubelet          Node embed-certs-271910 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     60s (x8 over 60s)  kubelet          Node embed-certs-271910 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           54s                node-controller  Node embed-certs-271910 event: Registered Node embed-certs-271910 in Controller
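The Events table records three separate "Starting kubelet" entries (2m1s, 114s, and 60s before capture), which lines up with the restart cycle this Pause test drives. To re-pull just the node's events rather than the full description, a field selector works; a sketch using standard kubectl, with the context and node name taken from this report:

	kubectl --context embed-certs-271910 get events \
	  --field-selector involvedObject.name=embed-certs-271910 \
	  --sort-by=.lastTimestamp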
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
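The "martian source" messages are the kernel's reverse-path logging for pod-subnet traffic (10.244.0.x) appearing on eth0; in Docker-driver minikube networks this is routine noise rather than a failure signal. Whether the kernel logs and checks such packets is controlled by two standard sysctls, which can be inspected on the host:

	# log_martians=1 enables these log lines; rp_filter controls the actual check
	sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter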
	
	
	==> etcd [5352f39b8b0747bd132936689a6fa5d2a11d72a6afa0c8818f848dde4c1d4518] <==
	{"level":"warn","ts":"2025-11-08T09:17:05.843117Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.855848Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53390","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.867567Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53436","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.875043Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.882242Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.891031Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53538","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.898549Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.909080Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53604","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.918481Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53644","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.927472Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53670","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.935069Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53706","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.942689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53760","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.949630Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53792","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.957595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53822","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.967377Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53854","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.974976Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53880","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.981774Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53914","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:05.988616Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53962","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.002594Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:53990","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.003910Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54050","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.012379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54066","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.024238Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54104","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.030969Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54142","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.038224Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54166","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:06.104055Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54238","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:03 up  1:00,  0 user,  load average: 4.45, 4.02, 2.63
	Linux embed-certs-271910 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [e37558e304fb251666501b3637ba5549bfeccc93f01f6e1c91e358882125958b] <==
	I1108 09:17:07.743158       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:17:07.822021       1 main.go:139] hostIP = 192.168.85.2
	podIP = 192.168.85.2
	I1108 09:17:07.822223       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:17:07.822248       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:17:07.822276       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:17:08Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:17:08.026104       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:17:08.026155       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:17:08.026171       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:17:08.026379       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:17:08.327063       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:17:08.327094       1 metrics.go:72] Registering metrics
	I1108 09:17:08.327161       1 controller.go:711] "Syncing nftables rules"
	I1108 09:17:18.026128       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:18.026199       1 main.go:301] handling current node
	I1108 09:17:28.026395       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:28.026432       1 main.go:301] handling current node
	I1108 09:17:38.026565       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:38.026600       1 main.go:301] handling current node
	I1108 09:17:48.026349       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:48.026398       1 main.go:301] handling current node
	I1108 09:17:58.029536       1 main.go:297] Handling node with IPs: map[192.168.85.2:{}]
	I1108 09:17:58.029587       1 main.go:301] handling current node
	
	
	==> kube-apiserver [4f37080f84679928c7dc97f8694d0e579a6d7c07580dea2acc938012181f50eb] <==
	I1108 09:17:06.636500       1 shared_informer.go:356] "Caches are synced" controller="cluster_authentication_trust_controller"
	I1108 09:17:06.636606       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:17:06.637053       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:17:06.637103       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:17:06.638680       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 09:17:06.638731       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:17:06.638756       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:17:06.638765       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:17:06.638773       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:17:06.643630       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:17:06.643800       1 policy_source.go:240] refreshing policies
	I1108 09:17:06.655594       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:17:06.661895       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:17:07.081363       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:17:07.117219       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:07.153397       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:07.171742       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:07.188577       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:17:07.262148       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.99.131.182"}
	I1108 09:17:07.289979       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.179.54"}
	I1108 09:17:07.539479       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:10.025331       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:17:10.426857       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:10.426933       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:10.527304       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [28d99c06b77fd13cee308f9a7f12ec7206f945a0776417e2c8d1311a8243960a] <==
	I1108 09:17:09.970795       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:17:09.972130       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:17:09.972170       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:17:09.972183       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:17:09.972257       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:17:09.972273       1 shared_informer.go:356] "Caches are synced" controller="TTL"
	I1108 09:17:09.972299       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:17:09.972317       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:17:09.972318       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:17:09.973434       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:17:09.977024       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:09.977538       1 shared_informer.go:356] "Caches are synced" controller="namespace"
	I1108 09:17:09.978324       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:09.979382       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:09.979393       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:17:09.980534       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:09.980634       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:09.983876       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:17:09.987224       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:17:09.989516       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:17:09.991094       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:17:09.999265       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	I1108 09:17:09.999292       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I1108 09:17:09.999302       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:17:10.005713       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [b9c6ba8e5353efb41278987aa4a581d742ba1a712f87a0d09f312cbf79324e9e] <==
	I1108 09:17:07.635080       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:17:07.704832       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:17:07.805505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:07.805546       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.85.2"]
	E1108 09:17:07.805625       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:07.828552       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:07.828603       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:07.835090       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:07.835579       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:07.835606       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:07.837185       1 config.go:200] "Starting service config controller"
	I1108 09:17:07.837205       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:07.837375       1 config.go:309] "Starting node config controller"
	I1108 09:17:07.837393       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:07.837401       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:07.837572       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:07.837588       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:07.837605       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:07.837610       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:07.937489       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:07.938197       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:17:07.938259       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
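The only non-routine line here is the startup warning that `nodePortAddresses` is unset, so NodePort services accept connections on every local IP. That is harmless for these tests; if it mattered, kubeadm-provisioned clusters like minikube's keep the kube-proxy configuration in a ConfigMap, where the warning's suggested `primary` value could be set:

	# Inspect the current kube-proxy configuration (kubeadm stores it here)
	kubectl --context embed-certs-271910 -n kube-system get configmap kube-proxy -o yaml \
	  | grep -n nodePortAddresses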
	
	
	==> kube-scheduler [8d8a79e509dd4ac3a34fd3cce48948ec1b9b67925d91b0ee3bddd3b4b0e06eb0] <==
	I1108 09:17:04.821412       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:17:06.569823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:17:06.569880       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:17:06.569893       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:17:06.569903       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:17:06.674526       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:17:06.674560       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:06.678895       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.678980       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:06.680025       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:17:06.680118       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:17:06.779331       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:10 embed-certs-271910 kubelet[715]: I1108 09:17:10.684239     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/95e18aaa-eef7-4785-bafe-319d88d78fbe-tmp-volume\") pod \"dashboard-metrics-scraper-6ffb444bf9-n8dq9\" (UID: \"95e18aaa-eef7-4785-bafe-319d88d78fbe\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9"
	Nov 08 09:17:10 embed-certs-271910 kubelet[715]: I1108 09:17:10.684430     715 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8cfs\" (UniqueName: \"kubernetes.io/projected/95e18aaa-eef7-4785-bafe-319d88d78fbe-kube-api-access-p8cfs\") pod \"dashboard-metrics-scraper-6ffb444bf9-n8dq9\" (UID: \"95e18aaa-eef7-4785-bafe-319d88d78fbe\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9"
	Nov 08 09:17:13 embed-certs-271910 kubelet[715]: I1108 09:17:13.127221     715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:17:18 embed-certs-271910 kubelet[715]: I1108 09:17:18.289915     715 scope.go:117] "RemoveContainer" containerID="ed1d2b6ec29d468e1afe6fb0b20c4b2fce1c3ada26a8d7e5e1b6adb39c40763f"
	Nov 08 09:17:18 embed-certs-271910 kubelet[715]: I1108 09:17:18.301662     715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-7gzf8" podStartSLOduration=4.261061702 podStartE2EDuration="8.301637384s" podCreationTimestamp="2025-11-08 09:17:10 +0000 UTC" firstStartedPulling="2025-11-08 09:17:10.942564619 +0000 UTC m=+7.853525394" lastFinishedPulling="2025-11-08 09:17:14.983140309 +0000 UTC m=+11.894101076" observedRunningTime="2025-11-08 09:17:15.301030872 +0000 UTC m=+12.211991656" watchObservedRunningTime="2025-11-08 09:17:18.301637384 +0000 UTC m=+15.212598168"
	Nov 08 09:17:19 embed-certs-271910 kubelet[715]: I1108 09:17:19.294406     715 scope.go:117] "RemoveContainer" containerID="ed1d2b6ec29d468e1afe6fb0b20c4b2fce1c3ada26a8d7e5e1b6adb39c40763f"
	Nov 08 09:17:19 embed-certs-271910 kubelet[715]: I1108 09:17:19.294514     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:19 embed-certs-271910 kubelet[715]: E1108 09:17:19.294710     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:20 embed-certs-271910 kubelet[715]: I1108 09:17:20.300580     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:20 embed-certs-271910 kubelet[715]: E1108 09:17:20.300767     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:28 embed-certs-271910 kubelet[715]: I1108 09:17:28.806420     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:29 embed-certs-271910 kubelet[715]: I1108 09:17:29.325658     715 scope.go:117] "RemoveContainer" containerID="7412e3aa77c97fc8db54d7bf3ac8732048a24f52738f28d7159aaff7b220ed88"
	Nov 08 09:17:29 embed-certs-271910 kubelet[715]: I1108 09:17:29.325909     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:29 embed-certs-271910 kubelet[715]: E1108 09:17:29.326160     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:38 embed-certs-271910 kubelet[715]: I1108 09:17:38.355201     715 scope.go:117] "RemoveContainer" containerID="c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704"
	Nov 08 09:17:38 embed-certs-271910 kubelet[715]: I1108 09:17:38.806191     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:38 embed-certs-271910 kubelet[715]: E1108 09:17:38.806447     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: I1108 09:17:53.204270     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: I1108 09:17:53.398082     715 scope.go:117] "RemoveContainer" containerID="ac35393a31f773e7d0cbc9579f724f5a25f022f0734a000f50e595f58587e61a"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: I1108 09:17:53.398342     715 scope.go:117] "RemoveContainer" containerID="5449d7527f410ae60d39874bd488d1455bcdb6ea192211f42f6b555c3c1e2b2a"
	Nov 08 09:17:53 embed-certs-271910 kubelet[715]: E1108 09:17:53.398559     715 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-n8dq9_kubernetes-dashboard(95e18aaa-eef7-4785-bafe-319d88d78fbe)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-n8dq9" podUID="95e18aaa-eef7-4785-bafe-319d88d78fbe"
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:17:56 embed-certs-271910 systemd[1]: kubelet.service: Consumed 1.754s CPU time.
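The last four journal lines show systemd stopping kubelet.service cleanly, which is the first step of minikube's pause sequence (stop the kubelet, then freeze the remaining containers). The unit state after a pause attempt can be checked with the same systemctl calls the pause code itself issues, as seen in the stderr later in this report:

	out/minikube-linux-amd64 -p embed-certs-271910 ssh -- sudo systemctl is-active kubelet
	out/minikube-linux-amd64 -p embed-certs-271910 ssh -- sudo systemctl status kubelet --no-pager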
	
	
	==> kubernetes-dashboard [ab178b9598b87b0a383b4725b6e758db53de46c4d43fe98360a285a76cf0bcc2] <==
	2025/11/08 09:17:15 Starting overwatch
	2025/11/08 09:17:15 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:15 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:15 Using secret token for csrf signing
	2025/11/08 09:17:15 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:15 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:15 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:17:15 Generating JWE encryption key
	2025/11/08 09:17:15 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:15 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:15 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:15 Creating in-cluster Sidecar client
	2025/11/08 09:17:15 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:15 Serving insecurely on HTTP port: 9090
	2025/11/08 09:17:45 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [ae778bb315748f1dce47d81dc587db089fa529d7d16d32255e40a51372d21c7d] <==
	I1108 09:17:38.415784       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:17:38.425088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:17:38.425150       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:17:38.427480       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:41.882484       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:46.142767       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:49.740489       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:52.794231       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:55.816853       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:55.822010       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:55.822171       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:17:55.822245       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"e00d8485-fd2f-4aef-b7f8-239d96fe73e5", APIVersion:"v1", ResourceVersion:"668", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-271910_02c0869f-bb07-47dd-a8e8-2e557886e3e0 became leader
	I1108 09:17:55.822365       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-271910_02c0869f-bb07-47dd-a8e8-2e557886e3e0!
	W1108 09:17:55.824140       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:55.827936       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:17:55.923375       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-271910_02c0869f-bb07-47dd-a8e8-2e557886e3e0!
	W1108 09:17:57.830965       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:57.841037       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:59.845334       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:17:59.849894       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:01.853958       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:01.858904       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:03.862399       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:03.965086       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	
	
	==> storage-provisioner [c74d93e81aff5ad27c1ed47d2107913cfdee1cd3c3edf7430976c7446cc8f704] <==
	I1108 09:17:07.599352       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:17:37.605760       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
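This is the earlier storage-provisioner instance, the one marked Exited in the container list above. It died on the same `10.96.0.1:443 ... i/o timeout` that CoreDNS logged, and its replacement started at 09:17:38 and went on to win the leader lease, so this looks like transient apiserver unreachability rather than a provisioner bug. The crashed instance's log remains retrievable after the restart:

	kubectl --context embed-certs-271910 -n kube-system logs storage-provisioner --previous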
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271910 -n embed-certs-271910
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-271910 -n embed-certs-271910: exit status 2 (367.691704ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context embed-certs-271910 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/embed-certs/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/embed-certs/serial/Pause (8.52s)
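This Pause failure and the default-k8s-diff-port one below share a shape: `minikube pause` exits non-zero even though the post-mortem status check still reports the apiserver Running. The productive reproduction is to rerun the pause by hand and then inspect runtime state with the same probes minikube's pause loop uses, both visible in the stderr below:

	out/minikube-linux-amd64 pause -p embed-certs-271910 --alsologtostderr -v=1
	out/minikube-linux-amd64 -p embed-certs-271910 ssh -- sudo crictl ps -a
	# the pause loop's runc probe; with crio the /run/runc state dir may simply not exist
	out/minikube-linux-amd64 -p embed-certs-271910 ssh -- sudo runc list -f json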

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Pause (6.29s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-677902 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p default-k8s-diff-port-677902 --alsologtostderr -v=1: exit status 80 (2.40214502s)

                                                
                                                
-- stdout --
	* Pausing node default-k8s-diff-port-677902 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:18:24.732229  330947 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:24.732526  330947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:24.732535  330947 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:24.732539  330947 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:24.732723  330947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:18:24.732933  330947 out.go:368] Setting JSON to false
	I1108 09:18:24.732976  330947 mustload.go:66] Loading cluster: default-k8s-diff-port-677902
	I1108 09:18:24.733303  330947 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:24.733687  330947 cli_runner.go:164] Run: docker container inspect default-k8s-diff-port-677902 --format={{.State.Status}}
	I1108 09:18:24.752190  330947 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:18:24.752497  330947 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:24.822708  330947 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:66 SystemTime:2025-11-08 09:18:24.809836245 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:24.823531  330947 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:default-k8s-diff-port-677902 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:18:24.828616  330947 out.go:179] * Pausing node default-k8s-diff-port-677902 ... 
	I1108 09:18:24.830053  330947 host.go:66] Checking if "default-k8s-diff-port-677902" exists ...
	I1108 09:18:24.830401  330947 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:24.830441  330947 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" default-k8s-diff-port-677902
	I1108 09:18:24.848892  330947 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33124 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/default-k8s-diff-port-677902/id_rsa Username:docker}
	I1108 09:18:24.941005  330947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:24.962772  330947 pause.go:52] kubelet running: true
	I1108 09:18:24.962864  330947 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:25.147371  330947 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:25.147484  330947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:25.215662  330947 cri.go:89] found id: "ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e"
	I1108 09:18:25.215684  330947 cri.go:89] found id: "bc88d24433065f56e713adcbcdcd3129f3222bccd28d3e8c4e897902b34dee73"
	I1108 09:18:25.215690  330947 cri.go:89] found id: "336544864c96dae8947ba947a1054111663f204edc02d625ea55a7b4ec6f4882"
	I1108 09:18:25.215694  330947 cri.go:89] found id: "fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01"
	I1108 09:18:25.215698  330947 cri.go:89] found id: "590afcaf8e89deeaaa4713575931b18d68731c33427658709a62d54a4119328c"
	I1108 09:18:25.215702  330947 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:18:25.215706  330947 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:18:25.215710  330947 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:18:25.215713  330947 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:18:25.215720  330947 cri.go:89] found id: "d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	I1108 09:18:25.215723  330947 cri.go:89] found id: "16b4255c8b0018ceca41bb41578fbe85e3341bfcaf4230bca79e8e26c1057dcd"
	I1108 09:18:25.215727  330947 cri.go:89] found id: ""
	I1108 09:18:25.215782  330947 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:25.227584  330947 retry.go:31] will retry after 177.50083ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:25Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:25.406004  330947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:25.418599  330947 pause.go:52] kubelet running: false
	I1108 09:18:25.418644  330947 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:25.557554  330947 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:25.557636  330947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:25.622526  330947 cri.go:89] found id: "ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e"
	I1108 09:18:25.622553  330947 cri.go:89] found id: "bc88d24433065f56e713adcbcdcd3129f3222bccd28d3e8c4e897902b34dee73"
	I1108 09:18:25.622559  330947 cri.go:89] found id: "336544864c96dae8947ba947a1054111663f204edc02d625ea55a7b4ec6f4882"
	I1108 09:18:25.622564  330947 cri.go:89] found id: "fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01"
	I1108 09:18:25.622569  330947 cri.go:89] found id: "590afcaf8e89deeaaa4713575931b18d68731c33427658709a62d54a4119328c"
	I1108 09:18:25.622574  330947 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:18:25.622578  330947 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:18:25.622583  330947 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:18:25.622586  330947 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:18:25.622606  330947 cri.go:89] found id: "d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	I1108 09:18:25.622609  330947 cri.go:89] found id: "16b4255c8b0018ceca41bb41578fbe85e3341bfcaf4230bca79e8e26c1057dcd"
	I1108 09:18:25.622612  330947 cri.go:89] found id: ""
	I1108 09:18:25.622652  330947 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:25.634609  330947 retry.go:31] will retry after 483.539795ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:25Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:26.118310  330947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:26.131159  330947 pause.go:52] kubelet running: false
	I1108 09:18:26.131208  330947 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:26.292006  330947 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:26.292067  330947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:26.376918  330947 cri.go:89] found id: "ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e"
	I1108 09:18:26.376945  330947 cri.go:89] found id: "bc88d24433065f56e713adcbcdcd3129f3222bccd28d3e8c4e897902b34dee73"
	I1108 09:18:26.376950  330947 cri.go:89] found id: "336544864c96dae8947ba947a1054111663f204edc02d625ea55a7b4ec6f4882"
	I1108 09:18:26.376955  330947 cri.go:89] found id: "fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01"
	I1108 09:18:26.376959  330947 cri.go:89] found id: "590afcaf8e89deeaaa4713575931b18d68731c33427658709a62d54a4119328c"
	I1108 09:18:26.376963  330947 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:18:26.376967  330947 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:18:26.376972  330947 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:18:26.376975  330947 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:18:26.376983  330947 cri.go:89] found id: "d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	I1108 09:18:26.376987  330947 cri.go:89] found id: "16b4255c8b0018ceca41bb41578fbe85e3341bfcaf4230bca79e8e26c1057dcd"
	I1108 09:18:26.376999  330947 cri.go:89] found id: ""
	I1108 09:18:26.377078  330947 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:26.395483  330947 retry.go:31] will retry after 433.308896ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:26Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:26.828977  330947 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:26.842199  330947 pause.go:52] kubelet running: false
	I1108 09:18:26.842258  330947 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:26.987444  330947 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:26.987523  330947 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:27.055074  330947 cri.go:89] found id: "ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e"
	I1108 09:18:27.055097  330947 cri.go:89] found id: "bc88d24433065f56e713adcbcdcd3129f3222bccd28d3e8c4e897902b34dee73"
	I1108 09:18:27.055102  330947 cri.go:89] found id: "336544864c96dae8947ba947a1054111663f204edc02d625ea55a7b4ec6f4882"
	I1108 09:18:27.055106  330947 cri.go:89] found id: "fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01"
	I1108 09:18:27.055110  330947 cri.go:89] found id: "590afcaf8e89deeaaa4713575931b18d68731c33427658709a62d54a4119328c"
	I1108 09:18:27.055115  330947 cri.go:89] found id: "8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc"
	I1108 09:18:27.055118  330947 cri.go:89] found id: "3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242"
	I1108 09:18:27.055122  330947 cri.go:89] found id: "31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3"
	I1108 09:18:27.055126  330947 cri.go:89] found id: "88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1"
	I1108 09:18:27.055133  330947 cri.go:89] found id: "d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	I1108 09:18:27.055136  330947 cri.go:89] found id: "16b4255c8b0018ceca41bb41578fbe85e3341bfcaf4230bca79e8e26c1057dcd"
	I1108 09:18:27.055150  330947 cri.go:89] found id: ""
	I1108 09:18:27.055197  330947 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:27.069347  330947 out.go:203] 
	W1108 09:18:27.070625  330947 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:18:27.070640  330947 out.go:285] * 
	W1108 09:18:27.074651  330947 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:18:27.076003  330947 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p default-k8s-diff-port-677902 --alsologtostderr -v=1 failed: exit status 80
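Root cause note: every "sudo runc list -f json" above fails with "open /run/runc: no such file or directory", so the pause path never obtains a container listing; it retries with varying delays (roughly 177ms, 483ms, 433ms) and finally exits with GUEST_PAUSE. Below is a minimal Go sketch of that retry-until-deadline pattern; the function name, base delay, and deadline are assumptions for illustration, not minikube's actual retry.go.

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

// listRunning shells out the same way the log above does and retries with a
// jittered delay, mirroring lines like "will retry after 177.50083ms".
// Illustrative sketch only; minikube's real retry implementation differs.
func listRunning(deadline time.Duration) ([]byte, error) {
	start := time.Now()
	base := 300 * time.Millisecond
	for {
		out, err := exec.Command("sudo", "runc", "list", "-f", "json").CombinedOutput()
		if err == nil {
			return out, nil
		}
		if time.Since(start) > deadline {
			// The state this test ends in: give up and surface the error.
			return nil, fmt.Errorf("list running: runc: %v\n%s", err, out)
		}
		// Sleep between 0.5x and 1.5x of the base delay.
		time.Sleep(base/2 + time.Duration(rand.Int63n(int64(base))))
	}
}

func main() {
	if _, err := listRunning(2 * time.Second); err != nil {
		fmt.Println(err)
	}
}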
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-677902
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-677902:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2",
	        "Created": "2025-11-08T09:16:20.668171946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:17:23.250821933Z",
	            "FinishedAt": "2025-11-08T09:17:21.411042413Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/hosts",
	        "LogPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2-json.log",
	        "Name": "/default-k8s-diff-port-677902",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-677902:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-677902",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2",
	                "LowerDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-677902",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-677902/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-677902",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-677902",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-677902",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ca84e8084a3047063b58262c0027eaf231551809613138b072a50a58760f050",
	            "SandboxKey": "/var/run/docker/netns/0ca84e8084a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-677902": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:1c:29:c9:91:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3530cc966e776b586ccf4d2edbdd1f526df4bef1d7edd4ef4684fbf79284383f",
	                    "EndpointID": "f5ee254f6dee0fa1b88220e46eb514e1cb885e9fd1251762c7d324501893de50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-677902",
	                        "1e7d7f902c4f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
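The inspect dump records the host port bindings that the helpers later read back with Go templates (see the (index (index .NetworkSettings.Ports "22/tcp") 0).HostPort lookup in the Last Start log below). Here is a short sketch of extracting the 8444/tcp binding from the same JSON with encoding/json; the struct mirrors only the fields shown above, and the file name in the usage comment is hypothetical.

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// container models just the slice of the `docker inspect` schema used here.
type container struct {
	Name            string
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

// Usage: docker inspect default-k8s-diff-port-677902 | go run extractport.go
func main() {
	var cs []container
	if err := json.NewDecoder(os.Stdin).Decode(&cs); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, c := range cs {
		for _, b := range c.NetworkSettings.Ports["8444/tcp"] {
			// For the dump above this prints: /default-k8s-diff-port-677902 -> 127.0.0.1:33127
			fmt.Printf("%s -> %s:%s\n", c.Name, b.HostIp, b.HostPort)
		}
	}
}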
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902: exit status 2 (343.825873ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
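The status probe prints "Running" yet exits 2: --format={{.Host}} is a Go text/template rendered over minikube's status struct, so stdout reflects only the host container, while the non-zero exit encodes that other components are not fully up, which is why the helper notes it "may be ok". A sketch of the template mechanics follows, with made-up field values and a trimmed struct rather than minikube's real one.

package main

import (
	"os"
	"text/template"
)

// status carries just the fields commonly shown by `minikube status`;
// the real struct has more. The values below are invented for this example.
type status struct {
	Host      string
	Kubelet   string
	APIServer string
}

func main() {
	st := status{Host: "Running", Kubelet: "Stopped", APIServer: "Paused"}
	tmpl := template.Must(template.New("status").Parse("{{.Host}}\n"))
	if err := tmpl.Execute(os.Stdout, st); err != nil {
		os.Exit(1)
	}
	// Prints "Running", the same single line seen in the stdout above.
}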
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs -n 25
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs -n 25: (1.162665727s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-677902 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-677902 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:58.478924  325211 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:58.479071  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479083  325211 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:58.479096  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479366  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:58.479861  325211 out.go:368] Setting JSON to false
	I1108 09:17:58.481212  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3629,"bootTime":1762589849,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:58.481320  325211 start.go:143] virtualization: kvm guest
	I1108 09:17:58.483829  325211 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:58.485799  325211 notify.go:221] Checking for updates...
	I1108 09:17:58.485811  325211 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:58.487583  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:58.489038  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:58.490367  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:58.491457  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:58.492651  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:58.494295  325211 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494419  325211 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494527  325211 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494637  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:58.521877  325211 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:58.522010  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.588747  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.576854709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.588862  325211 docker.go:319] overlay module found
	I1108 09:17:58.590962  325211 out.go:179] * Using the docker driver based on user configuration
	I1108 09:17:58.592340  325211 start.go:309] selected driver: docker
	I1108 09:17:58.592358  325211 start.go:930] validating driver "docker" against <nil>
	I1108 09:17:58.592371  325211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:58.593036  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.659441  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.646701871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.659624  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:17:58.659658  325211 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:17:58.659915  325211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:17:58.662513  325211 out.go:179] * Using Docker driver with root privileges
	I1108 09:17:58.663816  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:17:58.663873  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:58.663883  325211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:17:58.663955  325211 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:58.665267  325211 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:17:58.666553  325211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:58.667895  325211 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:58.669060  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:58.669119  325211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:58.669133  325211 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:58.669179  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:58.669265  325211 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:58.669277  325211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:58.669428  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:17:58.669460  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json: {Name:mk81817e2e19a8fdfa1ca2cba702e48d1cb06c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:58.692744  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:58.692762  325211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:58.692786  325211 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:58.692814  325211 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:58.692902  325211 start.go:364] duration metric: took 71.682µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:17:58.692929  325211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:58.693004  325211 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:18:00.076917  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:18:02.690159  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:17:58.696492  325211 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:17:58.696765  325211 start.go:159] libmachine.API.Create for "newest-cni-620528" (driver="docker")
	I1108 09:17:58.696803  325211 client.go:173] LocalClient.Create starting
	I1108 09:17:58.696917  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 09:17:58.696958  325211 main.go:143] libmachine: Decoding PEM data...
	I1108 09:17:58.696982  325211 main.go:143] libmachine: Parsing certificate...
	I1108 09:17:58.697061  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 09:17:58.697100  325211 main.go:143] libmachine: Decoding PEM data...
	I1108 09:17:58.697116  325211 main.go:143] libmachine: Parsing certificate...
	I1108 09:17:58.697562  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:17:58.717266  325211 cli_runner.go:211] docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:17:58.717347  325211 network_create.go:284] running [docker network inspect newest-cni-620528] to gather additional debugging logs...
	I1108 09:17:58.717379  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528
	W1108 09:17:58.736456  325211 cli_runner.go:211] docker network inspect newest-cni-620528 returned with exit code 1
	I1108 09:17:58.736492  325211 network_create.go:287] error running [docker network inspect newest-cni-620528]: docker network inspect newest-cni-620528: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-620528 not found
	I1108 09:17:58.736508  325211 network_create.go:289] output of [docker network inspect newest-cni-620528]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-620528 not found
	
	** /stderr **
	I1108 09:17:58.736599  325211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:17:58.758028  325211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
	I1108 09:17:58.758799  325211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-69402498439f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:64:3c:58:48:b9} reservation:<nil>}
	I1108 09:17:58.759757  325211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11dfd15cc420 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:1d:c0:7a:ca:31} reservation:<nil>}
	I1108 09:17:58.760782  325211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3530cc966e77 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:ab:9a:62:0b:ef} reservation:<nil>}
	I1108 09:17:58.761727  325211 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ea0d0f62e0b2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:91:c3:f9:f2:45} reservation:<nil>}
	I1108 09:17:58.762519  325211 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d2c6206fd833 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:72:29:08:bd:5d} reservation:<nil>}
	I1108 09:17:58.764114  325211 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8c0d0}
	I1108 09:17:58.764142  325211 network_create.go:124] attempt to create docker network newest-cni-620528 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:17:58.764193  325211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-620528 newest-cni-620528
	I1108 09:17:58.832507  325211 network_create.go:108] docker network newest-cni-620528 192.168.103.0/24 created
	I1108 09:17:58.832544  325211 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-620528" container
	I1108 09:17:58.832610  325211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:17:58.853554  325211 cli_runner.go:164] Run: docker volume create newest-cni-620528 --label name.minikube.sigs.k8s.io=newest-cni-620528 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:17:58.877252  325211 oci.go:103] Successfully created a docker volume newest-cni-620528
	I1108 09:17:58.877433  325211 cli_runner.go:164] Run: docker run --rm --name newest-cni-620528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-620528 --entrypoint /usr/bin/test -v newest-cni-620528:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:17:59.367458  325211 oci.go:107] Successfully prepared a docker volume newest-cni-620528
	I1108 09:17:59.367498  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:59.367522  325211 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:17:59.367593  325211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-620528:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 09:18:05.076934  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:18:07.078212  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:18:04.272478  325211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-620528:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.904840042s)
	I1108 09:18:04.272514  325211 kic.go:203] duration metric: took 4.90498935s to extract preloaded images to volume ...
	W1108 09:18:04.272612  325211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:18:04.272742  325211 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:18:04.272940  325211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:18:04.343948  325211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-620528 --name newest-cni-620528 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-620528 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-620528 --network newest-cni-620528 --ip 192.168.103.2 --volume newest-cni-620528:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:18:04.742474  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Running}}
	I1108 09:18:04.764312  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:04.784485  325211 cli_runner.go:164] Run: docker exec newest-cni-620528 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:18:04.838693  325211 oci.go:144] the created container "newest-cni-620528" has a running status.
	I1108 09:18:04.838725  325211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa...
	I1108 09:18:05.369787  325211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:18:05.457128  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:05.479326  325211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:18:05.479354  325211 kic_runner.go:114] Args: [docker exec --privileged newest-cni-620528 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:18:05.539352  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:05.562723  325211 machine.go:94] provisionDockerMachine start ...
	I1108 09:18:05.562853  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.583585  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.583921  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.583937  325211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:18:05.727446  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:05.727474  325211 ubuntu.go:182] provisioning hostname "newest-cni-620528"
	I1108 09:18:05.727531  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.746860  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.747202  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.747227  325211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-620528 && echo "newest-cni-620528" | sudo tee /etc/hostname
	I1108 09:18:05.888726  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:05.888814  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.908669  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.908892  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.908930  325211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-620528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-620528/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-620528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:18:06.037040  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:18:06.037068  325211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:18:06.037142  325211 ubuntu.go:190] setting up certificates
	I1108 09:18:06.037152  325211 provision.go:84] configureAuth start
	I1108 09:18:06.037215  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:06.055504  325211 provision.go:143] copyHostCerts
	I1108 09:18:06.055556  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:18:06.055570  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:18:06.055648  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:18:06.055756  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:18:06.055768  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:18:06.055809  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:18:06.055888  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:18:06.055898  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:18:06.055933  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:18:06.056003  325211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-620528 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-620528]
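The server certificate above is generated in-process by minikube's Go provisioning code. For reference, a certificate carrying the same SAN set could be sketched with openssl (a sketch only: self-signed here for brevity where minikube signs against ca.pem, the filenames are illustrative, and -addext needs OpenSSL 1.1.1+):

	# Sketch: a server cert with the SAN list shown in the log line above.
	openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
	  -keyout server-key.pem -out server.pem \
	  -subj "/O=jenkins.newest-cni-620528" \
	  -addext "subjectAltName=IP:127.0.0.1,IP:192.168.103.2,DNS:localhost,DNS:minikube,DNS:newest-cni-620528"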
	I1108 09:18:06.537976  325211 provision.go:177] copyRemoteCerts
	I1108 09:18:06.538036  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:18:06.538071  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:06.557256  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:06.654533  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:18:06.676656  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:18:06.695147  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:18:06.716798  325211 provision.go:87] duration metric: took 679.62911ms to configureAuth
	I1108 09:18:06.716829  325211 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:18:06.717067  325211 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:06.717198  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:06.738275  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:06.738563  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:06.738581  325211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:18:06.981160  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:18:06.981185  325211 machine.go:97] duration metric: took 1.418436634s to provisionDockerMachine
	I1108 09:18:06.981197  325211 client.go:176] duration metric: took 8.28438328s to LocalClient.Create
	I1108 09:18:06.981213  325211 start.go:167] duration metric: took 8.284449883s to libmachine.API.Create "newest-cni-620528"
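The CRIO_MINIKUBE_OPTIONS file written to /etc/sysconfig/crio.minikube during provisioning only takes effect because the base image's crio unit sources it. A drop-in along these lines would produce that wiring (an assumption about the kicbase image, not a dump of its actual unit file):

	# Hypothetical /etc/systemd/system/crio.service.d/10-minikube.conf
	[Service]
	EnvironmentFile=-/etc/sysconfig/crio.minikube
	ExecStart=
	ExecStart=/usr/bin/crio $CRIO_MINIKUBE_OPTIONS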
	I1108 09:18:06.981223  325211 start.go:293] postStartSetup for "newest-cni-620528" (driver="docker")
	I1108 09:18:06.981235  325211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:18:06.981314  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:18:06.981372  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.002647  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.105621  325211 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:18:07.109460  325211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:18:07.109484  325211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:18:07.109499  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:18:07.109560  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:18:07.109672  325211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:18:07.109799  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:18:07.117996  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:07.140135  325211 start.go:296] duration metric: took 158.897937ms for postStartSetup
	I1108 09:18:07.140513  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:07.161877  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:07.162158  325211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:18:07.162210  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.180553  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.271941  325211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:18:07.276948  325211 start.go:128] duration metric: took 8.583931143s to createHost
	I1108 09:18:07.276971  325211 start.go:83] releasing machines lock for "newest-cni-620528", held for 8.584057332s
	I1108 09:18:07.277031  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:07.295640  325211 ssh_runner.go:195] Run: cat /version.json
	I1108 09:18:07.295700  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.295708  325211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:18:07.295767  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.316331  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.318970  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.462968  325211 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:07.470084  325211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:18:07.506884  325211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:18:07.511834  325211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:18:07.511901  325211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:18:07.550104  325211 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:18:07.550130  325211 start.go:496] detecting cgroup driver to use...
	I1108 09:18:07.550167  325211 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:18:07.550207  325211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:18:07.568646  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:18:07.581696  325211 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:18:07.581749  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:18:07.598216  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:18:07.615476  325211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:18:07.707144  325211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:18:07.802881  325211 docker.go:234] disabling docker service ...
	I1108 09:18:07.802943  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:18:07.822170  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:18:07.836245  325211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:18:07.933480  325211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:18:08.019451  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:18:08.034231  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:18:08.048749  325211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:18:08.048808  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.061998  325211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:18:08.062059  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.072440  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.082524  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.092024  325211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:18:08.100534  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.110621  325211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.124570  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
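Taken together, the sed edits above leave /etc/crio/crio.conf.d/02-crio.conf looking roughly like this (reconstructed from the sed expressions in the log; the section headers and any surrounding keys are assumptions about the stock file):

	[crio.image]
	pause_image = "registry.k8s.io/pause:3.10.1"

	[crio.runtime]
	cgroup_manager = "systemd"
	conmon_cgroup = "pod"
	default_sysctls = [
	  "net.ipv4.ip_unprivileged_port_start=0",
	]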
	I1108 09:18:08.133373  325211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:18:08.140578  325211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
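The /proc write above is the non-persistent equivalent of:

	sudo sysctl -w net.ipv4.ip_forward=1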
	I1108 09:18:08.147929  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:08.225503  325211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:18:08.341819  325211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:18:08.341873  325211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:18:08.345953  325211 start.go:564] Will wait 60s for crictl version
	I1108 09:18:08.346005  325211 ssh_runner.go:195] Run: which crictl
	I1108 09:18:08.349629  325211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:18:08.373232  325211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
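That version call, like the later crictl images calls, resolves the CRI socket through the /etc/crictl.yaml written above rather than an explicit flag; the two forms below are equivalent:

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version   # explicit endpoint
	sudo crictl version                                                     # endpoint read from /etc/crictl.yaml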
	I1108 09:18:08.373330  325211 ssh_runner.go:195] Run: crio --version
	I1108 09:18:08.401094  325211 ssh_runner.go:195] Run: crio --version
	I1108 09:18:08.430369  325211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:18:08.431733  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:18:08.449726  325211 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:18:08.453798  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
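The one-liner above is the standard pattern for rewriting a root-owned file from an unprivileged shell: a bare `sudo cmd > file` would perform the redirection as the calling user, so the new content is staged in a temp file first. Annotated:

	{ grep -v $'\thost.minikube.internal$' /etc/hosts;   # drop any stale entry
	  echo "192.168.103.1	host.minikube.internal"; } \
	  > /tmp/h.$$                                        # stage as the current user
	sudo cp /tmp/h.$$ /etc/hosts                         # privileged copy into place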
	I1108 09:18:08.465344  325211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:18:08.466743  325211 kubeadm.go:884] updating cluster {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:18:08.466899  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:08.466970  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	W1108 09:18:09.576395  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:18:11.576747  318772 pod_ready.go:94] pod "coredns-66bc5c9577-x49dj" is "Ready"
	I1108 09:18:11.576778  318772 pod_ready.go:86] duration metric: took 38.005451155s for pod "coredns-66bc5c9577-x49dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.579411  318772 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.583270  318772 pod_ready.go:94] pod "etcd-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.583301  318772 pod_ready.go:86] duration metric: took 3.867249ms for pod "etcd-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.585244  318772 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.588870  318772 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.588894  318772 pod_ready.go:86] duration metric: took 3.627506ms for pod "kube-apiserver-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.590818  318772 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.775767  318772 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.775796  318772 pod_ready.go:86] duration metric: took 184.958059ms for pod "kube-controller-manager-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.976038  318772 pod_ready.go:83] waiting for pod "kube-proxy-5d9f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.376301  318772 pod_ready.go:94] pod "kube-proxy-5d9f2" is "Ready"
	I1108 09:18:12.376329  318772 pod_ready.go:86] duration metric: took 400.26953ms for pod "kube-proxy-5d9f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.575624  318772 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.975734  318772 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:12.975759  318772 pod_ready.go:86] duration metric: took 400.106156ms for pod "kube-scheduler-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.975771  318772 pod_ready.go:40] duration metric: took 39.407892943s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:18:13.020618  318772 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:13.022494  318772 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-677902" cluster and "default" namespace by default
	I1108 09:18:08.499601  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:08.499621  325211 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:18:08.499662  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:08.525110  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:08.525134  325211 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:18:08.525142  325211 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:18:08.525219  325211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-620528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:18:08.525313  325211 ssh_runner.go:195] Run: crio config
	I1108 09:18:08.573327  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:18:08.573352  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:08.573372  325211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:18:08.573400  325211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-620528 NodeName:newest-cni-620528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:18:08.573547  325211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-620528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:18:08.573618  325211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:18:08.582404  325211 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:18:08.582472  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:18:08.590616  325211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:18:08.603619  325211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:18:08.618758  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
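Once kubeadm.yaml.new lands on the node it can be sanity-checked before init; recent kubeadm releases ship a validator subcommand (a sketch, using the same pinned binary path as the log):

	sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate \
	  --config /var/tmp/minikube/kubeadm.yaml.new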
	I1108 09:18:08.631660  325211 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:18:08.635374  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:08.645241  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:08.724266  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:08.747748  325211 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528 for IP: 192.168.103.2
	I1108 09:18:08.747771  325211 certs.go:195] generating shared ca certs ...
	I1108 09:18:08.747792  325211 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.747940  325211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:18:08.748002  325211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:18:08.748015  325211 certs.go:257] generating profile certs ...
	I1108 09:18:08.748090  325211 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key
	I1108 09:18:08.748113  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt with IP's: []
	I1108 09:18:08.887418  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt ...
	I1108 09:18:08.887453  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt: {Name:mkef0a2461081e915a23a94a0dff129a9bbd1497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.887643  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key ...
	I1108 09:18:08.887659  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key: {Name:mka694d89084bd9f4458105a6c692b710fbbc73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.887768  325211 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34
	I1108 09:18:08.887787  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:18:09.159862  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 ...
	I1108 09:18:09.159894  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34: {Name:mke1ad44d78f87b88058a3d23ddbc317f0d1879b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.160086  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34 ...
	I1108 09:18:09.160102  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34: {Name:mka8bc3506ee0b2250d13ad586c09c6d85151fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.160232  325211 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt
	I1108 09:18:09.160351  325211 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key
	I1108 09:18:09.160445  325211 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key
	I1108 09:18:09.160467  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt with IP's: []
	I1108 09:18:09.384382  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt ...
	I1108 09:18:09.384416  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt: {Name:mk66386520822ac037714f942e30945bee483e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.384603  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key ...
	I1108 09:18:09.384629  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key: {Name:mk05f803707b48c031dab80c2b264c81f772d955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.384853  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:18:09.384902  325211 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:18:09.384914  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:18:09.384954  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:18:09.384988  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:18:09.385020  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:18:09.385082  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:09.385692  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:18:09.404511  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:18:09.421750  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:18:09.438836  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:18:09.457312  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:18:09.475401  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:18:09.493660  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:18:09.511469  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:18:09.529325  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:18:09.548820  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:18:09.568542  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:18:09.587025  325211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:18:09.599630  325211 ssh_runner.go:195] Run: openssl version
	I1108 09:18:09.605604  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:18:09.613542  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.617120  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.617172  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.651950  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:18:09.660859  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:18:09.669386  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.673162  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.673215  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.708114  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:18:09.716962  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:18:09.725461  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.729093  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.729148  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.762764  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
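The three ln -fs steps above implement OpenSSL's hashed-directory lookup: at verification time OpenSSL probes /etc/ssl/certs/<subject-hash>.0, so each CA PEM gets a symlink named after its subject hash. For the minikube CA that hash is the b5213941 value visible in the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	ls -l /etc/ssl/certs/b5213941.0                                           # symlink to minikubeCA.pem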
	I1108 09:18:09.771470  325211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:18:09.775240  325211 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:18:09.775313  325211 kubeadm.go:401] StartCluster: {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:09.775379  325211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:09.775419  325211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:09.802548  325211 cri.go:89] found id: ""
	I1108 09:18:09.802614  325211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:18:09.810703  325211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:18:09.818391  325211 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:18:09.818434  325211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:18:09.825944  325211 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:18:09.825965  325211 kubeadm.go:158] found existing configuration files:
	
	I1108 09:18:09.826003  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:18:09.833772  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:18:09.833821  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:18:09.840883  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:18:09.848092  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:18:09.848152  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:18:09.855208  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:18:09.862522  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:18:09.862577  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:18:09.869810  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:18:09.877264  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:18:09.877332  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:18:09.884880  325211 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:18:09.944123  325211 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:18:10.005908  325211 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:18:21.410632  325211 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:18:21.410734  325211 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:18:21.410861  325211 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:18:21.410921  325211 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:18:21.410961  325211 kubeadm.go:319] OS: Linux
	I1108 09:18:21.411005  325211 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:18:21.411051  325211 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:18:21.411093  325211 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:18:21.411168  325211 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:18:21.411220  325211 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:18:21.411259  325211 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:18:21.411331  325211 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:18:21.411374  325211 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:18:21.411467  325211 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:18:21.411552  325211 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:18:21.411625  325211 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:18:21.411684  325211 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:18:21.413538  325211 out.go:252]   - Generating certificates and keys ...
	I1108 09:18:21.413609  325211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:18:21.413671  325211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:18:21.413729  325211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:18:21.413779  325211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:18:21.413829  325211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:18:21.413879  325211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:18:21.413930  325211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:18:21.414043  325211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-620528] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:18:21.414143  325211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:18:21.414357  325211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-620528] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:18:21.414461  325211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:18:21.414548  325211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:18:21.414613  325211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:18:21.414686  325211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:18:21.414762  325211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:18:21.414828  325211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:18:21.414892  325211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:18:21.414984  325211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:18:21.415066  325211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:18:21.415150  325211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:18:21.415209  325211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:18:21.416674  325211 out.go:252]   - Booting up control plane ...
	I1108 09:18:21.416750  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:18:21.416832  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:18:21.416900  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:18:21.416989  325211 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:18:21.417064  325211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:18:21.417169  325211 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:18:21.417246  325211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:18:21.417298  325211 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:18:21.417432  325211 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:18:21.417536  325211 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:18:21.417588  325211 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0009061s
	I1108 09:18:21.417674  325211 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:18:21.417744  325211 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:18:21.417824  325211 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:18:21.417894  325211 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:18:21.417957  325211 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.103306268s
	I1108 09:18:21.418014  325211 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.592510436s
	I1108 09:18:21.418078  325211 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501564724s
	I1108 09:18:21.418169  325211 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:18:21.418299  325211 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:18:21.418366  325211 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:18:21.418547  325211 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-620528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:18:21.418595  325211 kubeadm.go:319] [bootstrap-token] Using token: dxtz3l.vknjl9wu6a3ee1z1
	I1108 09:18:21.421142  325211 out.go:252]   - Configuring RBAC rules ...
	I1108 09:18:21.421236  325211 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:18:21.421349  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:18:21.421474  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:18:21.421579  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:18:21.421693  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:18:21.421785  325211 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:18:21.421900  325211 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:18:21.421940  325211 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:18:21.421983  325211 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:18:21.421989  325211 kubeadm.go:319] 
	I1108 09:18:21.422044  325211 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:18:21.422051  325211 kubeadm.go:319] 
	I1108 09:18:21.422121  325211 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:18:21.422127  325211 kubeadm.go:319] 
	I1108 09:18:21.422162  325211 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:18:21.422254  325211 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:18:21.422353  325211 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:18:21.422364  325211 kubeadm.go:319] 
	I1108 09:18:21.422443  325211 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:18:21.422453  325211 kubeadm.go:319] 
	I1108 09:18:21.422517  325211 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:18:21.422527  325211 kubeadm.go:319] 
	I1108 09:18:21.422596  325211 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:18:21.422682  325211 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:18:21.422792  325211 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:18:21.422804  325211 kubeadm.go:319] 
	I1108 09:18:21.422915  325211 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:18:21.423005  325211 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:18:21.423013  325211 kubeadm.go:319] 
	I1108 09:18:21.423077  325211 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dxtz3l.vknjl9wu6a3ee1z1 \
	I1108 09:18:21.423178  325211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 09:18:21.423209  325211 kubeadm.go:319] 	--control-plane 
	I1108 09:18:21.423218  325211 kubeadm.go:319] 
	I1108 09:18:21.423320  325211 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:18:21.423332  325211 kubeadm.go:319] 
	I1108 09:18:21.423415  325211 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dxtz3l.vknjl9wu6a3ee1z1 \
	I1108 09:18:21.423522  325211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
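The control-plane-check phase logged above polls fixed health endpoints until each component answers. The same probes can be reproduced by hand; a hedged sketch, with addresses and ports taken only from the log lines above (-k skips certificate verification, which is also how kubeadm probes them):

	curl -k https://192.168.103.2:8443/livez    # kube-apiserver
	curl -k https://127.0.0.1:10257/healthz     # kube-controller-manager (run on the node)
	curl -k https://127.0.0.1:10259/livez       # kube-scheduler (run on the node)
	curl http://127.0.0.1:10248/healthz         # kubelet (run on the node)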
	I1108 09:18:21.423547  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:18:21.423556  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:21.424943  325211 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:18:21.426074  325211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:18:21.430178  325211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:18:21.430194  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:18:21.443928  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:18:21.660106  325211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:18:21.660208  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:21.660242  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-620528 minikube.k8s.io/updated_at=2025_11_08T09_18_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=newest-cni-620528 minikube.k8s.io/primary=true
	I1108 09:18:21.748522  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:21.748523  325211 ops.go:34] apiserver oom_adj: -16
	I1108 09:18:22.249505  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:22.749414  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:23.249638  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:23.749545  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:24.249056  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:24.749589  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:25.249218  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:25.748898  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:26.249409  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:26.325371  325211 kubeadm.go:1114] duration metric: took 4.665232347s to wait for elevateKubeSystemPrivileges
	I1108 09:18:26.325408  325211 kubeadm.go:403] duration metric: took 16.550096693s to StartCluster
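The elevateKubeSystemPrivileges step timed above is the create-clusterrolebinding run plus the half-second get-sa polling loop in the preceding lines. A hedged bash equivalent, using only the commands and paths shown in the log:

	KUBECTL=/var/lib/minikube/binaries/v1.34.1/kubectl
	KCFG=/var/lib/minikube/kubeconfig
	# Bind kube-system's default ServiceAccount to cluster-admin.
	sudo "$KUBECTL" --kubeconfig="$KCFG" create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# Poll until the token controller has created the "default" ServiceAccount.
	until sudo "$KUBECTL" --kubeconfig="$KCFG" get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done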
	I1108 09:18:26.325428  325211 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:26.325506  325211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:26.326602  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:26.326868  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:18:26.326886  325211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:18:26.326952  325211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:18:26.327074  325211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-620528"
	I1108 09:18:26.327096  325211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-620528"
	I1108 09:18:26.327116  325211 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:26.327134  325211 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:26.327098  325211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-620528"
	I1108 09:18:26.327180  325211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-620528"
	I1108 09:18:26.327530  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.327692  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.328462  325211 out.go:179] * Verifying Kubernetes components...
	I1108 09:18:26.330054  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:26.353318  325211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:18:26.353369  325211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-620528"
	I1108 09:18:26.353412  325211 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:26.353939  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.357811  325211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:26.357831  325211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:18:26.357895  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:26.384474  325211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:26.384501  325211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:18:26.384579  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:26.390090  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:26.410190  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:26.423195  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:18:26.475839  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:26.498839  325211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:26.519600  325211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:26.611163  325211 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
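The "host record injected" line is the result of the sed pipeline a few lines earlier, which rewrites the Corefile inside the coredns ConfigMap rather than touching DNS records directly. Reconstructed from the sed expressions in the log (the surrounding Corefile directives are assumed), the injected fragment is:

	# inserted immediately before the existing "errors" directive:
	        log
	# inserted immediately before "forward . /etc/resolv.conf":
	        hosts {
	           192.168.103.1 host.minikube.internal
	           fallthrough
	        }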
	I1108 09:18:26.612332  325211 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:18:26.612389  325211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:18:26.813396  325211 api_server.go:72] duration metric: took 486.477097ms to wait for apiserver process to appear ...
	I1108 09:18:26.813427  325211 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:18:26.813448  325211 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:26.818119  325211 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:18:26.819017  325211 api_server.go:141] control plane version: v1.34.1
	I1108 09:18:26.819045  325211 api_server.go:131] duration metric: took 5.610526ms to wait for apiserver health ...
	I1108 09:18:26.819055  325211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:18:26.820067  325211 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:18:26.821184  325211 addons.go:515] duration metric: took 494.232955ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:18:26.822044  325211 system_pods.go:59] 8 kube-system pods found
	I1108 09:18:26.822071  325211 system_pods.go:61] "coredns-66bc5c9577-7fndk" [ee377f7d-6e12-40b3-9257-b0558cadc023] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:26.822085  325211 system_pods.go:61] "etcd-newest-cni-620528" [d267a844-8f28-4d49-a9a3-f19643f494fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:18:26.822097  325211 system_pods.go:61] "kindnet-fk7tk" [8240271d-256f-4fde-82b4-0c071eb000b6] Running
	I1108 09:18:26.822110  325211 system_pods.go:61] "kube-apiserver-newest-cni-620528" [a9d10205-e74b-49a0-ab30-fc4274b6c40a] Running
	I1108 09:18:26.822119  325211 system_pods.go:61] "kube-controller-manager-newest-cni-620528" [5ca73710-f538-4265-a4f3-fe797f8e0cfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:18:26.822123  325211 system_pods.go:61] "kube-proxy-xrf7w" [ef13acfb-b7b4-4aba-8145-f2ce94813f8e] Running
	I1108 09:18:26.822130  325211 system_pods.go:61] "kube-scheduler-newest-cni-620528" [6dd7feec-3ba2-40c2-b761-3aa6855cf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:18:26.822134  325211 system_pods.go:61] "storage-provisioner" [4e2975a8-6a90-42a4-b1bb-b425b99ad8be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:26.822142  325211 system_pods.go:74] duration metric: took 3.081159ms to wait for pod list to return data ...
	I1108 09:18:26.822150  325211 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:18:26.824190  325211 default_sa.go:45] found service account: "default"
	I1108 09:18:26.824207  325211 default_sa.go:55] duration metric: took 2.050725ms for default service account to be created ...
	I1108 09:18:26.824220  325211 kubeadm.go:587] duration metric: took 497.30609ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:26.824239  325211 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:18:26.826499  325211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:18:26.826520  325211 node_conditions.go:123] node cpu capacity is 8
	I1108 09:18:26.826531  325211 node_conditions.go:105] duration metric: took 2.287321ms to run NodePressure ...
	I1108 09:18:26.826540  325211 start.go:242] waiting for startup goroutines ...
	I1108 09:18:27.115331  325211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-620528" context rescaled to 1 replicas
	I1108 09:18:27.115377  325211 start.go:247] waiting for cluster config update ...
	I1108 09:18:27.115389  325211 start.go:256] writing updated cluster config ...
	I1108 09:18:27.115700  325211 ssh_runner.go:195] Run: rm -f paused
	I1108 09:18:27.175370  325211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:27.180420  325211 out.go:179] * Done! kubectl is now configured to use "newest-cni-620528" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.680905025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.681090146Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f012c581a0bed22f0df144ef8f7e090cd971d21858f894c9481f3694dcd5ecd/merged/etc/passwd: no such file or directory"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.681113798Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f012c581a0bed22f0df144ef8f7e090cd971d21858f894c9481f3694dcd5ecd/merged/etc/group: no such file or directory"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.681665818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.80555421Z" level=info msg="Created container ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e: kube-system/storage-provisioner/storage-provisioner" id=48bdab7c-4c5b-4ae6-9446-d37fd1e9f2a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.806375717Z" level=info msg="Starting container: ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e" id=24d13489-f460-42dc-9039-1b2e936e1a1a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.809105654Z" level=info msg="Started container" PID=1714 containerID=ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e description=kube-system/storage-provisioner/storage-provisioner id=24d13489-f460-42dc-9039-1b2e936e1a1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f2dc084e3ed2eea0ca8b054c4aa5dd52a0ea12759f3bdbf1f2826b55ee9868d
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.229708799Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.233767895Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.233804668Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.23384017Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.237576808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.237602994Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.237625004Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.241107706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.241134071Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.241157734Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.244592595Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.244621262Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.244643702Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.248053406Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.248072122Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.248103651Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.25141806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.251442548Z" level=info msg="Updated default CNI network name to kindnet"
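The conflist file itself is not captured in this log. A minimal /etc/cni/net.d/10-kindnet.conflist consistent with the "type=ptp" messages above might look like the sketch below; only the network name, plugin type, and file path come from the CRI-O lines, the subnet matches the node's PodCIDR reported later in this report, and every other field is an assumption:

	{
	  "cniVersion": "0.3.1",
	  "name": "kindnet",
	  "plugins": [
	    {
	      "type": "ptp",
	      "ipMasq": false,
	      "ipam": {
	        "type": "host-local",
	        "ranges": [[{ "subnet": "10.244.0.0/24" }]],
	        "routes": [{ "dst": "0.0.0.0/0" }]
	      }
	    },
	    { "type": "portmap", "capabilities": { "portMappings": true } }
	  ]
	}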
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED             STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ff736adc69a2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           24 seconds ago      Running             storage-provisioner         1                   7f2dc084e3ed2       storage-provisioner                                    kube-system
	d8a9d5717ea56       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           30 seconds ago      Exited              dashboard-metrics-scraper   2                   6b50a146c6527       dashboard-metrics-scraper-6ffb444bf9-kht2x             kubernetes-dashboard
	16b4255c8b001       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   46 seconds ago      Running             kubernetes-dashboard        0                   480c19e0056a9       kubernetes-dashboard-855c9754f9-tzbhn                  kubernetes-dashboard
	bc88d24433065       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           55 seconds ago      Running             coredns                     0                   9584ce0e50ed4       coredns-66bc5c9577-x49dj                               kube-system
	ec4a0c71166f5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           55 seconds ago      Running             busybox                     1                   8f970770da0a1       busybox                                                default
	336544864c96d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           55 seconds ago      Running             kube-proxy                  0                   73593da9a407a       kube-proxy-5d9f2                                       kube-system
	fd6ee9dfcc1f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           55 seconds ago      Exited              storage-provisioner         0                   7f2dc084e3ed2       storage-provisioner                                    kube-system
	590afcaf8e89d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           55 seconds ago      Running             kindnet-cni                 0                   423c9b2f1c3b7       kindnet-x89ph                                          kube-system
	8193c98b4facb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           58 seconds ago      Running             kube-controller-manager     0                   c2caf8e4393bc       kube-controller-manager-default-k8s-diff-port-677902   kube-system
	3ce4807537535       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           58 seconds ago      Running             kube-scheduler              0                   eaf186615b2a8       kube-scheduler-default-k8s-diff-port-677902            kube-system
	31e3f87ef285b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           58 seconds ago      Running             etcd                        0                   bb10aeee9afe0       etcd-default-k8s-diff-port-677902                      kube-system
	88d1ed66cd10f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           58 seconds ago      Running             kube-apiserver              0                   707a66d9d769d       kube-apiserver-default-k8s-diff-port-677902            kube-system
	
	
	==> coredns [bc88d24433065f56e713adcbcdcd3129f3222bccd28d3e8c4e897902b34dee73] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43810 - 27394 "HINFO IN 5687207325388689829.965123986894590539. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.031322325s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
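These reflector failures mean CoreDNS could not reach the kubernetes Service VIP (10.96.0.1:443) until kube-proxy and kindnet finished programming the node, which lines up with their sync logs later in this report. A hedged way to re-check that path from inside the cluster once things settle (the image name is an assumption, not something this run used):

	kubectl run curl-test --rm -it --restart=Never --image=curlimages/curl -- \
	  curl -sk https://10.96.0.1:443/livez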
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-677902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-677902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=default-k8s-diff-port-677902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-677902
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:18:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-677902
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9a73a23a-0cc4-4911-a4ee-3b28faba34c9
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 coredns-66bc5c9577-x49dj                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     108s
	  kube-system                 etcd-default-k8s-diff-port-677902                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         114s
	  kube-system                 kindnet-x89ph                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      108s
	  kube-system                 kube-apiserver-default-k8s-diff-port-677902             250m (3%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-677902    200m (2%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 kube-proxy-5d9f2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kube-system                 kube-scheduler-default-k8s-diff-port-677902             100m (1%)     0 (0%)      0 (0%)           0 (0%)         114s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         108s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kht2x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tzbhn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         53s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 107s                 kube-proxy       
	  Normal  Starting                 55s                  kube-proxy       
	  Normal  Starting                 119s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 119s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 119s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x8 over 119s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    114s                 kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  114s                 kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     114s                 kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  Starting                 114s                 kubelet          Starting kubelet.
	  Normal  RegisteredNode           109s                 node-controller  Node default-k8s-diff-port-677902 event: Registered Node default-k8s-diff-port-677902 in Controller
	  Normal  NodeReady                97s                  kubelet          Node default-k8s-diff-port-677902 status is now: NodeReady
	  Normal  Starting                 59s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     59s (x8 over 59s)    kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           53s                  node-controller  Node default-k8s-diff-port-677902 event: Registered Node default-k8s-diff-port-677902 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3] <==
	{"level":"warn","ts":"2025-11-08T09:17:31.404646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.417870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.426037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.432390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.446003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.454186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.460599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.467917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.476106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.491742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.498345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.504911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43290","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:18:02.685804Z","caller":"traceutil/trace.go:172","msg":"trace[1437782594] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:681; }","duration":"132.557491ms","start":"2025-11-08T09:18:02.553220Z","end":"2025-11-08T09:18:02.685777Z","steps":["trace[1437782594] 'read index received'  (duration: 132.549971ms)","trace[1437782594] 'applied index is now lower than readState.Index'  (duration: 6.185µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:18:02.685935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.69411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:18:02.686007Z","caller":"traceutil/trace.go:172","msg":"trace[807879678] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:646; }","duration":"132.786023ms","start":"2025-11-08T09:18:02.553212Z","end":"2025-11-08T09:18:02.685998Z","steps":["trace[807879678] 'agreement among raft nodes before linearized reading'  (duration: 132.643996ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:02.686112Z","caller":"traceutil/trace.go:172","msg":"trace[373780219] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"144.192959ms","start":"2025-11-08T09:18:02.541902Z","end":"2025-11-08T09:18:02.686095Z","steps":["trace[373780219] 'process raft request'  (duration: 144.047132ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:02.686416Z","caller":"traceutil/trace.go:172","msg":"trace[893313292] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"138.216563ms","start":"2025-11-08T09:18:02.548183Z","end":"2025-11-08T09:18:02.686400Z","steps":["trace[893313292] 'process raft request'  (duration: 138.132486ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:18:02.686456Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.642347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-x49dj\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-08T09:18:02.686500Z","caller":"traceutil/trace.go:172","msg":"trace[1456719482] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-x49dj; range_end:; response_count:1; response_revision:647; }","duration":"113.69979ms","start":"2025-11-08T09:18:02.572790Z","end":"2025-11-08T09:18:02.686490Z","steps":["trace[1456719482] 'agreement among raft nodes before linearized reading'  (duration: 113.504696ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:03.309416Z","caller":"traceutil/trace.go:172","msg":"trace[1126989899] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"150.856557ms","start":"2025-11-08T09:18:03.158535Z","end":"2025-11-08T09:18:03.309391Z","steps":["trace[1126989899] 'process raft request'  (duration: 125.638507ms)","trace[1126989899] 'compare'  (duration: 24.942951ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:18:03.678906Z","caller":"traceutil/trace.go:172","msg":"trace[637832210] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:686; }","duration":"105.935442ms","start":"2025-11-08T09:18:03.572929Z","end":"2025-11-08T09:18:03.678865Z","steps":["trace[637832210] 'read index received'  (duration: 105.925694ms)","trace[637832210] 'applied index is now lower than readState.Index'  (duration: 7.836µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:18:03.679146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.196421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-x49dj\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-08T09:18:03.679193Z","caller":"traceutil/trace.go:172","msg":"trace[455745962] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-x49dj; range_end:; response_count:1; response_revision:650; }","duration":"106.26111ms","start":"2025-11-08T09:18:03.572920Z","end":"2025-11-08T09:18:03.679181Z","steps":["trace[455745962] 'agreement among raft nodes before linearized reading'  (duration: 106.076564ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:03.679221Z","caller":"traceutil/trace.go:172","msg":"trace[1166764285] transaction","detail":"{read_only:false; response_revision:651; number_of_response:1; }","duration":"107.436536ms","start":"2025-11-08T09:18:03.571769Z","end":"2025-11-08T09:18:03.679205Z","steps":["trace[1166764285] 'process raft request'  (duration: 107.21514ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:03.742221Z","caller":"traceutil/trace.go:172","msg":"trace[604594362] transaction","detail":"{read_only:false; response_revision:652; number_of_response:1; }","duration":"167.19365ms","start":"2025-11-08T09:18:03.575007Z","end":"2025-11-08T09:18:03.742201Z","steps":["trace[604594362] 'process raft request'  (duration: 166.94534ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:18:28 up  1:00,  0 user,  load average: 3.51, 3.84, 2.60
	Linux default-k8s-diff-port-677902 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [590afcaf8e89deeaaa4713575931b18d68731c33427658709a62d54a4119328c] <==
	I1108 09:17:32.929684       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:17:32.929916       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:17:32.930072       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:17:32.930090       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:17:32.930111       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:17:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:17:33.229215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:17:33.229304       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:17:33.229369       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:17:33.229533       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:18:03.230536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:18:03.230565       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:18:03.230571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:18:03.230531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:18:04.929634       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:18:04.929672       1 metrics.go:72] Registering metrics
	I1108 09:18:04.929748       1 controller.go:711] "Syncing nftables rules"
	I1108 09:18:13.229362       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:18:13.229408       1 main.go:301] handling current node
	I1108 09:18:23.236564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:18:23.236599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1] <==
	I1108 09:17:32.035317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:17:32.035323       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:17:32.035547       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:17:32.035551       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:17:32.035638       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:17:32.035741       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:17:32.035643       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:17:32.036335       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:17:32.036423       1 policy_source.go:240] refreshing policies
	I1108 09:17:32.042806       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:17:32.050818       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:17:32.056527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:17:32.067094       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:17:32.067156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:17:32.288402       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:17:32.319651       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:32.340068       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:32.348752       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:32.356538       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:17:32.390558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.55.128"}
	I1108 09:17:32.400623       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.126.252"}
	I1108 09:17:32.938162       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:35.540203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:17:35.787822       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:35.837732       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc] <==
	I1108 09:17:35.347171       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:17:35.384522       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:17:35.385531       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:17:35.385544       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:35.385572       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:17:35.385606       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:17:35.385642       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:17:35.385708       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:17:35.385727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:17:35.385731       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:35.386336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:35.386403       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:17:35.387702       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:35.388853       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:17:35.388921       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:17:35.388990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-677902"
	I1108 09:17:35.389026       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:17:35.391245       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:17:35.391334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:35.392333       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:17:35.394138       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:17:35.396338       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:17:35.401626       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:17:35.403906       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:17:35.410217       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [336544864c96dae8947ba947a1054111663f204edc02d625ea55a7b4ec6f4882] <==
	I1108 09:17:32.844589       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:17:32.924137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:17:33.024245       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:33.025530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:17:33.025669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:33.048429       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:33.048498       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:33.054750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:33.055317       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:33.055358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:33.058228       1 config.go:200] "Starting service config controller"
	I1108 09:17:33.058251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:33.058501       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:33.058572       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:33.058490       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:33.058652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:33.058744       1 config.go:309] "Starting node config controller"
	I1108 09:17:33.058753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:33.058761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:33.158418       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:33.158631       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:17:33.158723       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
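	
	The single error line in this section is kube-proxy's own configuration hint rather than a failure: with nodePortAddresses unset, NodePort services accept connections on every local IP. A minimal sketch of the remedy the log itself suggests (how to thread this flag through minikube's kube-proxy configuration is not shown here and would need verifying):
	
	  kube-proxy --nodeport-addresses primary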
	
	
	==> kube-scheduler [3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242] <==
	I1108 09:17:30.481653       1 serving.go:386] Generated self-signed cert in-memory
	I1108 09:17:32.009454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:17:32.009486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:32.014530       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 09:17:32.014569       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 09:17:32.014568       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:17:32.014565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:32.014593       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:17:32.014599       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:32.014910       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:17:32.014932       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:17:32.115716       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 09:17:32.115737       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:32.115716       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:35 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:35.985145     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgpjs\" (UniqueName: \"kubernetes.io/projected/ac0f9c47-0b03-4970-aa59-3a5c15e3435d-kube-api-access-xgpjs\") pod \"kubernetes-dashboard-855c9754f9-tzbhn\" (UID: \"ac0f9c47-0b03-4970-aa59-3a5c15e3435d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tzbhn"
	Nov 08 09:17:35 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:35.985320     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcdv5\" (UniqueName: \"kubernetes.io/projected/6a00085b-d40d-40c1-8ce5-957bb382f725-kube-api-access-jcdv5\") pod \"dashboard-metrics-scraper-6ffb444bf9-kht2x\" (UID: \"6a00085b-d40d-40c1-8ce5-957bb382f725\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x"
	Nov 08 09:17:39 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:39.490452     726 scope.go:117] "RemoveContainer" containerID="1e754fbb5c5abc613e21f513108a486c674a8f708ec39c75c74e52b92d8b9da5"
	Nov 08 09:17:40 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:40.495908     726 scope.go:117] "RemoveContainer" containerID="1e754fbb5c5abc613e21f513108a486c674a8f708ec39c75c74e52b92d8b9da5"
	Nov 08 09:17:40 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:40.496195     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:40 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:40.496415     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:41.451326     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:41.500638     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:41.500842     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:41.520562     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tzbhn" podStartSLOduration=1.324542048 podStartE2EDuration="6.5205365s" podCreationTimestamp="2025-11-08 09:17:35 +0000 UTC" firstStartedPulling="2025-11-08 09:17:36.24105159 +0000 UTC m=+6.903517965" lastFinishedPulling="2025-11-08 09:17:41.437046035 +0000 UTC m=+12.099512417" observedRunningTime="2025-11-08 09:17:41.520362599 +0000 UTC m=+12.182828995" watchObservedRunningTime="2025-11-08 09:17:41.5205365 +0000 UTC m=+12.183002896"
	Nov 08 09:17:44 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:44.494743     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:44 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:44.494990     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:17:57 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:57.439392     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:58 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:58.548653     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:58 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:58.549056     726 scope.go:117] "RemoveContainer" containerID="d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	Nov 08 09:17:58 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:58.549253     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:18:03 default-k8s-diff-port-677902 kubelet[726]: I1108 09:18:03.567487     726 scope.go:117] "RemoveContainer" containerID="fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01"
	Nov 08 09:18:04 default-k8s-diff-port-677902 kubelet[726]: I1108 09:18:04.495577     726 scope.go:117] "RemoveContainer" containerID="d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	Nov 08 09:18:04 default-k8s-diff-port-677902 kubelet[726]: E1108 09:18:04.495803     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:18:17 default-k8s-diff-port-677902 kubelet[726]: I1108 09:18:17.439337     726 scope.go:117] "RemoveContainer" containerID="d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	Nov 08 09:18:17 default-k8s-diff-port-677902 kubelet[726]: E1108 09:18:17.439579     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: kubelet.service: Consumed 1.817s CPU time.
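	
	The kubelet lines show dashboard-metrics-scraper crash-looping with a growing back-off (10s, then 20s) until systemd stops the kubelet. A triage sketch for pulling the previous container's logs and the pod events (pod and context names taken from the log above):
	
	  kubectl --context default-k8s-diff-port-677902 -n kubernetes-dashboard logs dashboard-metrics-scraper-6ffb444bf9-kht2x --previous
	  kubectl --context default-k8s-diff-port-677902 -n kubernetes-dashboard describe pod dashboard-metrics-scraper-6ffb444bf9-kht2x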
	
	
	==> kubernetes-dashboard [16b4255c8b0018ceca41bb41578fbe85e3341bfcaf4230bca79e8e26c1057dcd] <==
	2025/11/08 09:17:41 Starting overwatch
	2025/11/08 09:17:41 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:41 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:41 Using secret token for csrf signing
	2025/11/08 09:17:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:17:41 Generating JWE encryption key
	2025/11/08 09:17:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:41 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:41 Creating in-cluster Sidecar client
	2025/11/08 09:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:41 Serving insecurely on HTTP port: 9090
	2025/11/08 09:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
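	
	Both health-check failures point at the dashboard-metrics-scraper service, consistent with the scraper pod crash-looping in the kubelet log above. One way to confirm the service has no ready backends (sketch):
	
	  kubectl --context default-k8s-diff-port-677902 -n kubernetes-dashboard get svc,endpointslices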
	
	
	==> storage-provisioner [fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01] <==
	I1108 09:17:32.809728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:18:02.812370       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
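	
	The fatal line shows this first provisioner instance timing out against the in-cluster apiserver VIP (10.96.0.1:443) shortly after the node restart; its replacement (next section) comes up once the service path is usable again. Assuming the iptables proxier reported in the kube-proxy log above, the VIP and its NAT rules can be checked with:
	
	  kubectl --context default-k8s-diff-port-677902 get svc kubernetes
	  out/minikube-linux-amd64 -p default-k8s-diff-port-677902 ssh -- sudo iptables -t nat -S KUBE-SERVICES | grep 10.96.0.1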
	
	
	==> storage-provisioner [ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e] <==
	I1108 09:18:03.820060       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:18:03.827812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:18:03.827865       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:18:03.847792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:07.303318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:11.563874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:15.162658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:18.216616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:21.238879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:21.244407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:18:21.244559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:18:21.244644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7a8f9c03-6b30-4ca5-a9cb-a97fbf27f9a3", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-677902_601d6368-7210-4e3b-88b5-d2c4956566cd became leader
	I1108 09:18:21.244714       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-677902_601d6368-7210-4e3b-88b5-d2c4956566cd!
	W1108 09:18:21.249315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:21.252726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:18:21.345593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-677902_601d6368-7210-4e3b-88b5-d2c4956566cd!
	W1108 09:18:23.256230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:23.262073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:25.265945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:25.270227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:27.272969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:27.276922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
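	
	The repeated client-go warnings appear to come from the provisioner's leader election, which still locks on a v1 Endpoints object (kube-system/k8s.io-minikube-hostpath, acquired at 09:18:21 above); they are noisy but not themselves errors. The lock object can be inspected with (sketch):
	
	  kubectl --context default-k8s-diff-port-677902 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml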
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902: exit status 2 (348.450076ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
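The harness treats exit status 2 from minikube status as possibly benign ("may be ok"): the APIServer field prints Running while the overall status is degraded, likely an after-effect of the failed pause attempt. The exit code can be reproduced directly with the same binary and flags the harness uses:

  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902; echo "exit=$?"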
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect default-k8s-diff-port-677902
helpers_test.go:243: (dbg) docker inspect default-k8s-diff-port-677902:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2",
	        "Created": "2025-11-08T09:16:20.668171946Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 318967,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:17:23.250821933Z",
	            "FinishedAt": "2025-11-08T09:17:21.411042413Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/hostname",
	        "HostsPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/hosts",
	        "LogPath": "/var/lib/docker/containers/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2/1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2-json.log",
	        "Name": "/default-k8s-diff-port-677902",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "default-k8s-diff-port-677902:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "default-k8s-diff-port-677902",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "1e7d7f902c4f0196b7683c373deef697ee0d65615b34da3abc1eb091f65fd6d2",
	                "LowerDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/merged",
	                "UpperDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/diff",
	                "WorkDir": "/var/lib/docker/overlay2/279e8326977141575ea289cc33cfd2b04d789374983c5275b73b2f6b93032ff1/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "default-k8s-diff-port-677902",
	                "Source": "/var/lib/docker/volumes/default-k8s-diff-port-677902/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "default-k8s-diff-port-677902",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8444/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "default-k8s-diff-port-677902",
	                "name.minikube.sigs.k8s.io": "default-k8s-diff-port-677902",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "0ca84e8084a3047063b58262c0027eaf231551809613138b072a50a58760f050",
	            "SandboxKey": "/var/run/docker/netns/0ca84e8084a3",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33124"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33125"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33128"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33126"
	                    }
	                ],
	                "8444/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33127"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "default-k8s-diff-port-677902": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.76.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "1a:1c:29:c9:91:10",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "3530cc966e776b586ccf4d2edbdd1f526df4bef1d7edd4ef4684fbf79284383f",
	                    "EndpointID": "f5ee254f6dee0fa1b88220e46eb514e1cb885e9fd1251762c7d324501893de50",
	                    "Gateway": "192.168.76.1",
	                    "IPAddress": "192.168.76.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "default-k8s-diff-port-677902",
	                        "1e7d7f902c4f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
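For reference, the apiserver's published port in the inspect output above (8444/tcp, mapped to 127.0.0.1:33127) can be read back with a Go-template query:

  docker inspect -f '{{ (index (index .NetworkSettings.Ports "8444/tcp") 0).HostPort }}' default-k8s-diff-port-677902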
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902: exit status 2 (345.287969ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs -n 25
E1108 09:18:30.080145    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:255: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs -n 25: (1.093404536s)
helpers_test.go:260: TestStartStop/group/default-k8s-diff-port/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-677902 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-677902 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ stop    │ -p newest-cni-620528 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:58.478924  325211 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:58.479071  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479083  325211 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:58.479096  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479366  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:58.479861  325211 out.go:368] Setting JSON to false
	I1108 09:17:58.481212  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3629,"bootTime":1762589849,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:58.481320  325211 start.go:143] virtualization: kvm guest
	I1108 09:17:58.483829  325211 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:58.485799  325211 notify.go:221] Checking for updates...
	I1108 09:17:58.485811  325211 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:58.487583  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:58.489038  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:58.490367  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:58.491457  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:58.492651  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:58.494295  325211 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494419  325211 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494527  325211 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494637  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:58.521877  325211 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:58.522010  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.588747  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.576854709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.588862  325211 docker.go:319] overlay module found
	I1108 09:17:58.590962  325211 out.go:179] * Using the docker driver based on user configuration
	I1108 09:17:58.592340  325211 start.go:309] selected driver: docker
	I1108 09:17:58.592358  325211 start.go:930] validating driver "docker" against <nil>
	I1108 09:17:58.592371  325211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:58.593036  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.659441  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.646701871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.659624  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:17:58.659658  325211 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:17:58.659915  325211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:17:58.662513  325211 out.go:179] * Using Docker driver with root privileges
	I1108 09:17:58.663816  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:17:58.663873  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:58.663883  325211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:17:58.663955  325211 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:58.665267  325211 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:17:58.666553  325211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:58.667895  325211 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:58.669060  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:58.669119  325211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:58.669133  325211 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:58.669179  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:58.669265  325211 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:58.669277  325211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:58.669428  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:17:58.669460  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json: {Name:mk81817e2e19a8fdfa1ca2cba702e48d1cb06c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:58.692744  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:58.692762  325211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:58.692786  325211 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:58.692814  325211 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:58.692902  325211 start.go:364] duration metric: took 71.682µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:17:58.692929  325211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:58.693004  325211 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:18:00.076917  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:18:02.690159  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:17:58.696492  325211 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:17:58.696765  325211 start.go:159] libmachine.API.Create for "newest-cni-620528" (driver="docker")
	I1108 09:17:58.696803  325211 client.go:173] LocalClient.Create starting
	I1108 09:17:58.696917  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 09:17:58.696958  325211 main.go:143] libmachine: Decoding PEM data...
	I1108 09:17:58.696982  325211 main.go:143] libmachine: Parsing certificate...
	I1108 09:17:58.697061  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 09:17:58.697100  325211 main.go:143] libmachine: Decoding PEM data...
	I1108 09:17:58.697116  325211 main.go:143] libmachine: Parsing certificate...
	I1108 09:17:58.697562  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:17:58.717266  325211 cli_runner.go:211] docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:17:58.717347  325211 network_create.go:284] running [docker network inspect newest-cni-620528] to gather additional debugging logs...
	I1108 09:17:58.717379  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528
	W1108 09:17:58.736456  325211 cli_runner.go:211] docker network inspect newest-cni-620528 returned with exit code 1
	I1108 09:17:58.736492  325211 network_create.go:287] error running [docker network inspect newest-cni-620528]: docker network inspect newest-cni-620528: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-620528 not found
	I1108 09:17:58.736508  325211 network_create.go:289] output of [docker network inspect newest-cni-620528]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-620528 not found
	
	** /stderr **
	I1108 09:17:58.736599  325211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:17:58.758028  325211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
	I1108 09:17:58.758799  325211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-69402498439f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:64:3c:58:48:b9} reservation:<nil>}
	I1108 09:17:58.759757  325211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11dfd15cc420 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:1d:c0:7a:ca:31} reservation:<nil>}
	I1108 09:17:58.760782  325211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3530cc966e77 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:ab:9a:62:0b:ef} reservation:<nil>}
	I1108 09:17:58.761727  325211 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ea0d0f62e0b2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:91:c3:f9:f2:45} reservation:<nil>}
	I1108 09:17:58.762519  325211 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d2c6206fd833 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:72:29:08:bd:5d} reservation:<nil>}
	I1108 09:17:58.764114  325211 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8c0d0}
	I1108 09:17:58.764142  325211 network_create.go:124] attempt to create docker network newest-cni-620528 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:17:58.764193  325211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-620528 newest-cni-620528
	I1108 09:17:58.832507  325211 network_create.go:108] docker network newest-cni-620528 192.168.103.0/24 created
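The run of `skipping subnet` lines above shows minikube probing candidate private /24 networks, stepping the third octet by 9 (49, 58, 67, ...) until it finds one that no existing docker bridge occupies. A minimal Go sketch of that scan; the hard-coded `taken` set stands in for the live `docker network inspect` probes and the step size is simply what the log exhibits:

    package main

    import "fmt"

    func main() {
        // Subnets already claimed by existing bridges, per the log above.
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        // Walk the third octet in steps of 9 until a gap appears.
        for octet := 49; octet <= 254; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                fmt.Println("using free private subnet", subnet) // 192.168.103.0/24
                return
            }
        }
    }

The chosen range's .1 becomes the bridge gateway and .2 the node address, which matches the "calculated static IP" line that follows.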
	I1108 09:17:58.832544  325211 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-620528" container
	I1108 09:17:58.832610  325211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:17:58.853554  325211 cli_runner.go:164] Run: docker volume create newest-cni-620528 --label name.minikube.sigs.k8s.io=newest-cni-620528 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:17:58.877252  325211 oci.go:103] Successfully created a docker volume newest-cni-620528
	I1108 09:17:58.877433  325211 cli_runner.go:164] Run: docker run --rm --name newest-cni-620528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-620528 --entrypoint /usr/bin/test -v newest-cni-620528:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:17:59.367458  325211 oci.go:107] Successfully prepared a docker volume newest-cni-620528
	I1108 09:17:59.367498  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:59.367522  325211 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:17:59.367593  325211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-620528:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 09:18:05.076934  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:18:07.078212  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:18:04.272478  325211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-620528:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.904840042s)
	I1108 09:18:04.272514  325211 kic.go:203] duration metric: took 4.90498935s to extract preloaded images to volume ...
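The ~4.9 s step above unpacks the cached lz4 image tarball straight into the named volume by running tar inside a throwaway container. A sketch of issuing that same command from Go with os/exec, much as minikube's cli_runner does; the paths and image reference are copied from the log, the helper name is hypothetical:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // extractPreload unpacks an lz4-compressed tarball into a docker volume by
    // mounting both into a disposable container whose entrypoint is tar.
    func extractPreload(tarball, volume, image string) error {
        cmd := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/tar",
            "-v", tarball+":/preloaded.tar:ro",
            "-v", volume+":/extractDir",
            image, "-I", "lz4", "-xf", "/preloaded.tar", "-C", "/extractDir")
        out, err := cmd.CombinedOutput()
        if err != nil {
            return fmt.Errorf("extract failed: %v: %s", err, out)
        }
        return nil
    }

    func main() {
        _ = extractPreload(
            "/home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4",
            "newest-cni-620528",
            "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837")
    }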
	W1108 09:18:04.272612  325211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:18:04.272742  325211 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:18:04.272940  325211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:18:04.343948  325211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-620528 --name newest-cni-620528 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-620528 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-620528 --network newest-cni-620528 --ip 192.168.103.2 --volume newest-cni-620528:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:18:04.742474  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Running}}
	I1108 09:18:04.764312  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:04.784485  325211 cli_runner.go:164] Run: docker exec newest-cni-620528 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:18:04.838693  325211 oci.go:144] the created container "newest-cni-620528" has a running status.
	I1108 09:18:04.838725  325211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa...
	I1108 09:18:05.369787  325211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:18:05.457128  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:05.479326  325211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:18:05.479354  325211 kic_runner.go:114] Args: [docker exec --privileged newest-cni-620528 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:18:05.539352  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:05.562723  325211 machine.go:94] provisionDockerMachine start ...
	I1108 09:18:05.562853  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.583585  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.583921  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.583937  325211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:18:05.727446  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:05.727474  325211 ubuntu.go:182] provisioning hostname "newest-cni-620528"
	I1108 09:18:05.727531  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.746860  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.747202  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.747227  325211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-620528 && echo "newest-cni-620528" | sudo tee /etc/hostname
	I1108 09:18:05.888726  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:05.888814  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.908669  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.908892  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.908930  325211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-620528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-620528/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-620528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:18:06.037040  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
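The three SSH round-trips above (hostname, sudo hostname ..., the /etc/hosts guard script) all follow the same dial-run-read cycle against the forwarded port 33129. A sketch of one such cycle with golang.org/x/crypto/ssh, assuming that is the library behind the `Using SSH client type: native` lines; the key path and port are from the log:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local forwarded port only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33129", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, err := session.CombinedOutput("hostname")
        if err != nil {
            panic(err)
        }
        fmt.Printf("SSH cmd output: %s", out) // newest-cni-620528
    }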
	I1108 09:18:06.037068  325211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:18:06.037142  325211 ubuntu.go:190] setting up certificates
	I1108 09:18:06.037152  325211 provision.go:84] configureAuth start
	I1108 09:18:06.037215  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:06.055504  325211 provision.go:143] copyHostCerts
	I1108 09:18:06.055556  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:18:06.055570  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:18:06.055648  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:18:06.055756  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:18:06.055768  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:18:06.055809  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:18:06.055888  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:18:06.055898  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:18:06.055933  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:18:06.056003  325211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-620528 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-620528]
	I1108 09:18:06.537976  325211 provision.go:177] copyRemoteCerts
	I1108 09:18:06.538036  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:18:06.538071  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:06.557256  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:06.654533  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:18:06.676656  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:18:06.695147  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:18:06.716798  325211 provision.go:87] duration metric: took 679.62911ms to configureAuth
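The `generating server cert ... san=[...]` step inside configureAuth is ordinary CA-signed x509 issuance with both IP and DNS subject alternative names. A library-style sketch with crypto/x509, assuming caCert/caKey were already parsed from the ca.pem and ca-key.pem files the log reads; the SAN values and lifetime are taken from the log and config dump:

    package certs

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "math/big"
        "net"
        "time"
    )

    // signServerCert issues a CA-signed server certificate carrying the SANs
    // reported in the log; caCert/caKey are assumed preloaded by the caller.
    func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            return nil, nil, err
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.newest-cni-620528"}},
            DNSNames:     []string{"localhost", "minikube", "newest-cni-620528"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.103.2")},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration:26280h0m0s
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        return der, key, err
    }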
	I1108 09:18:06.716829  325211 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:18:06.717067  325211 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:06.717198  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:06.738275  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:06.738563  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:06.738581  325211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:18:06.981160  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:18:06.981185  325211 machine.go:97] duration metric: took 1.418436634s to provisionDockerMachine
	I1108 09:18:06.981197  325211 client.go:176] duration metric: took 8.28438328s to LocalClient.Create
	I1108 09:18:06.981213  325211 start.go:167] duration metric: took 8.284449883s to libmachine.API.Create "newest-cni-620528"
	I1108 09:18:06.981223  325211 start.go:293] postStartSetup for "newest-cni-620528" (driver="docker")
	I1108 09:18:06.981235  325211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:18:06.981314  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:18:06.981372  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.002647  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.105621  325211 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:18:07.109460  325211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:18:07.109484  325211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:18:07.109499  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:18:07.109560  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:18:07.109672  325211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:18:07.109799  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:18:07.117996  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:07.140135  325211 start.go:296] duration metric: took 158.897937ms for postStartSetup
	I1108 09:18:07.140513  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:07.161877  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:07.162158  325211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:18:07.162210  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.180553  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.271941  325211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:18:07.276948  325211 start.go:128] duration metric: took 8.583931143s to createHost
	I1108 09:18:07.276971  325211 start.go:83] releasing machines lock for "newest-cni-620528", held for 8.584057332s
	I1108 09:18:07.277031  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:07.295640  325211 ssh_runner.go:195] Run: cat /version.json
	I1108 09:18:07.295700  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.295708  325211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:18:07.295767  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.316331  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.318970  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.462968  325211 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:07.470084  325211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:18:07.506884  325211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:18:07.511834  325211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:18:07.511901  325211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:18:07.550104  325211 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
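The find/-exec mv step above neutralizes any pre-existing bridge or podman CNI definitions by renaming them with a .mk_disabled suffix, so only the CNI minikube installs later (kindnet, recommended further down) stays active. The same idea in Go, shown host-side for illustration; the directory and suffix come from the log:

    package main

    import (
        "fmt"
        "os"
        "path/filepath"
        "strings"
    )

    func main() {
        matches, err := filepath.Glob("/etc/cni/net.d/*")
        if err != nil {
            panic(err)
        }
        for _, p := range matches {
            if fi, err := os.Stat(p); err != nil || fi.IsDir() {
                continue // find's -maxdepth 1 -type f: regular files only
            }
            base := filepath.Base(p)
            // Disable bridge/podman configs unless already disabled.
            if (strings.Contains(base, "bridge") || strings.Contains(base, "podman")) &&
                !strings.HasSuffix(base, ".mk_disabled") {
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    fmt.Fprintln(os.Stderr, err)
                }
            }
        }
    }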
	I1108 09:18:07.550130  325211 start.go:496] detecting cgroup driver to use...
	I1108 09:18:07.550167  325211 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:18:07.550207  325211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:18:07.568646  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:18:07.581696  325211 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:18:07.581749  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:18:07.598216  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:18:07.615476  325211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:18:07.707144  325211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:18:07.802881  325211 docker.go:234] disabling docker service ...
	I1108 09:18:07.802943  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:18:07.822170  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:18:07.836245  325211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:18:07.933480  325211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:18:08.019451  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:18:08.034231  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:18:08.048749  325211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:18:08.048808  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.061998  325211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:18:08.062059  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.072440  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.082524  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.092024  325211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:18:08.100534  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.110621  325211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.124570  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.133373  325211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:18:08.140578  325211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:18:08.147929  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:08.225503  325211 ssh_runner.go:195] Run: sudo systemctl restart crio
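Each sed one-liner above rewrites a single `key = value` line in /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup manager, conmon cgroup, the unprivileged-port sysctl) before crio is restarted. A line-oriented Go equivalent of that rewrite, with a hypothetical helper name:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setTOMLKey replaces any existing `key = ...` line (commented or not) with
    // `key = "value"`, mirroring the sed expressions in the log.
    func setTOMLKey(conf, key, value string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, fmt.Sprintf(`%s = %q`, key, value))
    }

    func main() {
        conf := "# pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
        conf = setTOMLKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setTOMLKey(conf, "cgroup_manager", "systemd")
        fmt.Print(conf)
    }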
	I1108 09:18:08.341819  325211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:18:08.341873  325211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:18:08.345953  325211 start.go:564] Will wait 60s for crictl version
	I1108 09:18:08.346005  325211 ssh_runner.go:195] Run: which crictl
	I1108 09:18:08.349629  325211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:18:08.373232  325211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:18:08.373330  325211 ssh_runner.go:195] Run: crio --version
	I1108 09:18:08.401094  325211 ssh_runner.go:195] Run: crio --version
	I1108 09:18:08.430369  325211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:18:08.431733  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:18:08.449726  325211 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:18:08.453798  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:08.465344  325211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:18:08.466743  325211 kubeadm.go:884] updating cluster {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:18:08.466899  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:08.466970  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	W1108 09:18:09.576395  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:18:11.576747  318772 pod_ready.go:94] pod "coredns-66bc5c9577-x49dj" is "Ready"
	I1108 09:18:11.576778  318772 pod_ready.go:86] duration metric: took 38.005451155s for pod "coredns-66bc5c9577-x49dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.579411  318772 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.583270  318772 pod_ready.go:94] pod "etcd-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.583301  318772 pod_ready.go:86] duration metric: took 3.867249ms for pod "etcd-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.585244  318772 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.588870  318772 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.588894  318772 pod_ready.go:86] duration metric: took 3.627506ms for pod "kube-apiserver-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.590818  318772 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.775767  318772 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.775796  318772 pod_ready.go:86] duration metric: took 184.958059ms for pod "kube-controller-manager-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.976038  318772 pod_ready.go:83] waiting for pod "kube-proxy-5d9f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.376301  318772 pod_ready.go:94] pod "kube-proxy-5d9f2" is "Ready"
	I1108 09:18:12.376329  318772 pod_ready.go:86] duration metric: took 400.26953ms for pod "kube-proxy-5d9f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.575624  318772 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.975734  318772 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:12.975759  318772 pod_ready.go:86] duration metric: took 400.106156ms for pod "kube-scheduler-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.975771  318772 pod_ready.go:40] duration metric: took 39.407892943s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:18:13.020618  318772 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:13.022494  318772 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-677902" cluster and "default" namespace by default
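The interleaved pod_ready lines (PID 318772) belong to the other cluster in this run, default-k8s-diff-port-677902, polling coredns until its Ready condition flips to True, which takes about 38 s here. The same wait expressed as a stdlib-only loop that shells out to kubectl; minikube itself does this through client-go, so the kubectl form is an assumption made for brevity:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    func main() {
        jsonpath := `jsonpath={.status.conditions[?(@.type=="Ready")].status}`
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-677902",
                "-n", "kube-system", "get", "pod", "coredns-66bc5c9577-x49dj",
                "-o", jsonpath).Output()
            if err == nil && strings.TrimSpace(string(out)) == "True" {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second) // same order of delay as the log's retries
        }
        fmt.Println("timed out waiting for Ready")
    }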
	I1108 09:18:08.499601  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:08.499621  325211 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:18:08.499662  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:08.525110  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:08.525134  325211 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:18:08.525142  325211 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:18:08.525219  325211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-620528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:18:08.525313  325211 ssh_runner.go:195] Run: crio config
	I1108 09:18:08.573327  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:18:08.573352  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:08.573372  325211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:18:08.573400  325211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-620528 NodeName:newest-cni-620528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:18:08.573547  325211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-620528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:18:08.573618  325211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:18:08.582404  325211 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:18:08.582472  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:18:08.590616  325211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:18:08.603619  325211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:18:08.618758  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
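The 2214-byte kubeadm.yaml.new written above is the YAML dump shown earlier, produced by rendering the kubeadm options struct through a text template. A toy rendering of the same idea; this template and its field names are invented stand-ins for minikube's real ones in the bootstrapper package, while the values are taken from the log:

    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    clusterName: mk
    controlPlaneEndpoint: control-plane.minikube.internal:{{.APIServerPort}}
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      dnsDomain: {{.DNSDomain}}
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        params := struct {
            APIServerPort     int
            KubernetesVersion string
            DNSDomain         string
            PodSubnet         string
            ServiceCIDR       string
        }{8443, "v1.34.1", "cluster.local", "10.42.0.0/16", "10.96.0.0/12"}
        // Render to stdout; minikube scp's the result to /var/tmp/minikube/kubeadm.yaml.new.
        template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, params)
    }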
	I1108 09:18:08.631660  325211 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:18:08.635374  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:08.645241  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:08.724266  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:08.747748  325211 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528 for IP: 192.168.103.2
	I1108 09:18:08.747771  325211 certs.go:195] generating shared ca certs ...
	I1108 09:18:08.747792  325211 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.747940  325211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:18:08.748002  325211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:18:08.748015  325211 certs.go:257] generating profile certs ...
	I1108 09:18:08.748090  325211 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key
	I1108 09:18:08.748113  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt with IP's: []
	I1108 09:18:08.887418  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt ...
	I1108 09:18:08.887453  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt: {Name:mkef0a2461081e915a23a94a0dff129a9bbd1497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.887643  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key ...
	I1108 09:18:08.887659  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key: {Name:mka694d89084bd9f4458105a6c692b710fbbc73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.887768  325211 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34
	I1108 09:18:08.887787  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:18:09.159862  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 ...
	I1108 09:18:09.159894  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34: {Name:mke1ad44d78f87b88058a3d23ddbc317f0d1879b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.160086  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34 ...
	I1108 09:18:09.160102  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34: {Name:mka8bc3506ee0b2250d13ad586c09c6d85151fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.160232  325211 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt
	I1108 09:18:09.160351  325211 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key
	I1108 09:18:09.160445  325211 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key
	I1108 09:18:09.160467  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt with IP's: []
	I1108 09:18:09.384382  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt ...
	I1108 09:18:09.384416  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt: {Name:mk66386520822ac037714f942e30945bee483e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.384603  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key ...
	I1108 09:18:09.384629  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key: {Name:mk05f803707b48c031dab80c2b264c81f772d955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
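Note the SAN list on the apiserver cert a few lines up: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]. The first entry is derived rather than configured: it is the first host address of the ServiceCIDR (10.96.0.0/12), where the kubernetes ClusterIP service will live, so the apiserver cert must cover it. The derivation in a few lines:

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        _, cidr, err := net.ParseCIDR("10.96.0.0/12") // ServiceCIDR from the cluster config
        if err != nil {
            panic(err)
        }
        ip := cidr.IP.To4()
        ip[3]++ // first usable host address in the range
        fmt.Println(ip) // 10.96.0.1
    }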
	I1108 09:18:09.384853  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:18:09.384902  325211 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:18:09.384914  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:18:09.384954  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:18:09.384988  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:18:09.385020  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:18:09.385082  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:09.385692  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:18:09.404511  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:18:09.421750  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:18:09.438836  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:18:09.457312  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:18:09.475401  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:18:09.493660  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:18:09.511469  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:18:09.529325  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:18:09.548820  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:18:09.568542  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:18:09.587025  325211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:18:09.599630  325211 ssh_runner.go:195] Run: openssl version
	I1108 09:18:09.605604  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:18:09.613542  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.617120  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.617172  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.651950  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:18:09.660859  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:18:09.669386  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.673162  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.673215  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.708114  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:18:09.716962  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:18:09.725461  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.729093  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.729148  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.762764  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
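The `openssl x509 -hash` / `ln -fs` pairs above install each CA into the system trust directory under its subject-hash filename (9369.pem becomes 51391683.0, minikubeCA.pem becomes b5213941.0), which is how OpenSSL locates CAs at verification time. A sketch of one pair, shelling out to openssl for the hash since computing it by hand would mean canonicalizing the subject DER:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem"
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            panic(err)
        }
        link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0" // e.g. b5213941.0
        // `test -L || ln -fs` in the log: check first so the step stays idempotent.
        if _, err := os.Lstat(link); os.IsNotExist(err) {
            if err := os.Symlink(pem, link); err != nil {
                panic(err)
            }
        }
        fmt.Println("trusted via", link)
    }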
	I1108 09:18:09.771470  325211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:18:09.775240  325211 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:18:09.775313  325211 kubeadm.go:401] StartCluster: {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:09.775379  325211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:09.775419  325211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:09.802548  325211 cri.go:89] found id: ""
	I1108 09:18:09.802614  325211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:18:09.810703  325211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:18:09.818391  325211 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:18:09.818434  325211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:18:09.825944  325211 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:18:09.825965  325211 kubeadm.go:158] found existing configuration files:
	
	I1108 09:18:09.826003  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:18:09.833772  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:18:09.833821  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:18:09.840883  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:18:09.848092  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:18:09.848152  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:18:09.855208  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:18:09.862522  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:18:09.862577  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:18:09.869810  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:18:09.877264  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:18:09.877332  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:18:09.884880  325211 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:18:09.944123  325211 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:18:10.005908  325211 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:18:21.410632  325211 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:18:21.410734  325211 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:18:21.410861  325211 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:18:21.410921  325211 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:18:21.410961  325211 kubeadm.go:319] OS: Linux
	I1108 09:18:21.411005  325211 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:18:21.411051  325211 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:18:21.411093  325211 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:18:21.411168  325211 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:18:21.411220  325211 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:18:21.411259  325211 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:18:21.411331  325211 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:18:21.411374  325211 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:18:21.411467  325211 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:18:21.411552  325211 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:18:21.411625  325211 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:18:21.411684  325211 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:18:21.413538  325211 out.go:252]   - Generating certificates and keys ...
	I1108 09:18:21.413609  325211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:18:21.413671  325211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:18:21.413729  325211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:18:21.413779  325211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:18:21.413829  325211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:18:21.413879  325211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:18:21.413930  325211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:18:21.414043  325211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-620528] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:18:21.414143  325211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:18:21.414357  325211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-620528] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:18:21.414461  325211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:18:21.414548  325211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:18:21.414613  325211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:18:21.414686  325211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:18:21.414762  325211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:18:21.414828  325211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:18:21.414892  325211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:18:21.414984  325211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:18:21.415066  325211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:18:21.415150  325211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:18:21.415209  325211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:18:21.416674  325211 out.go:252]   - Booting up control plane ...
	I1108 09:18:21.416750  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:18:21.416832  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:18:21.416900  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:18:21.416989  325211 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:18:21.417064  325211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:18:21.417169  325211 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:18:21.417246  325211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:18:21.417298  325211 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:18:21.417432  325211 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:18:21.417536  325211 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:18:21.417588  325211 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0009061s
	I1108 09:18:21.417674  325211 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:18:21.417744  325211 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:18:21.417824  325211 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:18:21.417894  325211 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:18:21.417957  325211 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.103306268s
	I1108 09:18:21.418014  325211 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.592510436s
	I1108 09:18:21.418078  325211 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501564724s
	I1108 09:18:21.418169  325211 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:18:21.418299  325211 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:18:21.418366  325211 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:18:21.418547  325211 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-620528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:18:21.418595  325211 kubeadm.go:319] [bootstrap-token] Using token: dxtz3l.vknjl9wu6a3ee1z1
	I1108 09:18:21.421142  325211 out.go:252]   - Configuring RBAC rules ...
	I1108 09:18:21.421236  325211 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:18:21.421349  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:18:21.421474  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:18:21.421579  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:18:21.421693  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:18:21.421785  325211 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:18:21.421900  325211 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:18:21.421940  325211 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:18:21.421983  325211 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:18:21.421989  325211 kubeadm.go:319] 
	I1108 09:18:21.422044  325211 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:18:21.422051  325211 kubeadm.go:319] 
	I1108 09:18:21.422121  325211 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:18:21.422127  325211 kubeadm.go:319] 
	I1108 09:18:21.422162  325211 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:18:21.422254  325211 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:18:21.422353  325211 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:18:21.422364  325211 kubeadm.go:319] 
	I1108 09:18:21.422443  325211 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:18:21.422453  325211 kubeadm.go:319] 
	I1108 09:18:21.422517  325211 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:18:21.422527  325211 kubeadm.go:319] 
	I1108 09:18:21.422596  325211 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:18:21.422682  325211 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:18:21.422792  325211 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:18:21.422804  325211 kubeadm.go:319] 
	I1108 09:18:21.422915  325211 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:18:21.423005  325211 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:18:21.423013  325211 kubeadm.go:319] 
	I1108 09:18:21.423077  325211 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dxtz3l.vknjl9wu6a3ee1z1 \
	I1108 09:18:21.423178  325211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 09:18:21.423209  325211 kubeadm.go:319] 	--control-plane 
	I1108 09:18:21.423218  325211 kubeadm.go:319] 
	I1108 09:18:21.423320  325211 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:18:21.423332  325211 kubeadm.go:319] 
	I1108 09:18:21.423415  325211 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dxtz3l.vknjl9wu6a3ee1z1 \
	I1108 09:18:21.423522  325211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
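The --discovery-token-ca-cert-hash printed with the join commands is the SHA-256 of the cluster CA's public key. As a sketch (assuming the certificateDir shown earlier in this log, /var/lib/minikube/certs, so the CA sits at ca.crt there), it can be recomputed on the node and compared against the sha256:d18a5a5c… value above:

    # Recompute the discovery hash from the cluster CA public key (run on the node).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'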
	I1108 09:18:21.423547  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:18:21.423556  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:21.424943  325211 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:18:21.426074  325211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:18:21.430178  325211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:18:21.430194  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:18:21.443928  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
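Per the cni.go lines above, the 2601-byte manifest applied here is the kindnet manifest recommended for the docker driver + crio runtime. A quick rollout check (the app=kindnet label is an assumption about that manifest, not shown in this log):

    kubectl --context newest-cni-620528 -n kube-system get pods -l app=kindnet -o wide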
	I1108 09:18:21.660106  325211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:18:21.660208  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:21.660242  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-620528 minikube.k8s.io/updated_at=2025_11_08T09_18_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=newest-cni-620528 minikube.k8s.io/primary=true
	I1108 09:18:21.748522  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:21.748523  325211 ops.go:34] apiserver oom_adj: -16
	I1108 09:18:22.249505  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:22.749414  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:23.249638  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:23.749545  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:24.249056  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:24.749589  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:25.249218  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:25.748898  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:26.249409  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:26.325371  325211 kubeadm.go:1114] duration metric: took 4.665232347s to wait for elevateKubeSystemPrivileges
	I1108 09:18:26.325408  325211 kubeadm.go:403] duration metric: took 16.550096693s to StartCluster
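The burst of `kubectl get sa default` calls above is a roughly half-second poll (per the timestamps) waiting for the default service account to exist before kube-system privileges are elevated; the same check can be run by hand:

    kubectl --context newest-cni-620528 -n default get serviceaccount default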
	I1108 09:18:26.325428  325211 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:26.325506  325211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:26.326602  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:26.326868  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:18:26.326886  325211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:18:26.326952  325211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:18:26.327074  325211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-620528"
	I1108 09:18:26.327096  325211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-620528"
	I1108 09:18:26.327116  325211 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:26.327134  325211 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:26.327098  325211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-620528"
	I1108 09:18:26.327180  325211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-620528"
	I1108 09:18:26.327530  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.327692  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.328462  325211 out.go:179] * Verifying Kubernetes components...
	I1108 09:18:26.330054  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:26.353318  325211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:18:26.353369  325211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-620528"
	I1108 09:18:26.353412  325211 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:26.353939  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.357811  325211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:26.357831  325211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:18:26.357895  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:26.384474  325211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:26.384501  325211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:18:26.384579  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:26.390090  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:26.410190  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:26.423195  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:18:26.475839  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:26.498839  325211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:26.519600  325211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:26.611163  325211 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
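The sed pipeline above splices a hosts block into the coredns ConfigMap so in-cluster lookups of host.minikube.internal resolve to the gateway. Reconstructed from the sed expression (the final Corefile is not captured verbatim in this log), the injected fragment is:

    hosts {
       192.168.103.1 host.minikube.internal
       fallthrough
    }

    # Verify what actually landed in the ConfigMap:
    kubectl --context newest-cni-620528 -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'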
	I1108 09:18:26.612332  325211 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:18:26.612389  325211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:18:26.813396  325211 api_server.go:72] duration metric: took 486.477097ms to wait for apiserver process to appear ...
	I1108 09:18:26.813427  325211 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:18:26.813448  325211 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:26.818119  325211 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:18:26.819017  325211 api_server.go:141] control plane version: v1.34.1
	I1108 09:18:26.819045  325211 api_server.go:131] duration metric: took 5.610526ms to wait for apiserver health ...
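The 200 on /healthz needs no credentials: the default system:public-info-viewer binding exposes /healthz, /livez and /readyz to unauthenticated clients, so the same probe works from any shell that can reach the node:

    curl -k https://192.168.103.2:8443/healthz          # expect body: ok
    curl -k 'https://192.168.103.2:8443/livez?verbose'  # per-check breakdown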
	I1108 09:18:26.819055  325211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:18:26.820067  325211 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:18:26.821184  325211 addons.go:515] duration metric: took 494.232955ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:18:26.822044  325211 system_pods.go:59] 8 kube-system pods found
	I1108 09:18:26.822071  325211 system_pods.go:61] "coredns-66bc5c9577-7fndk" [ee377f7d-6e12-40b3-9257-b0558cadc023] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:26.822085  325211 system_pods.go:61] "etcd-newest-cni-620528" [d267a844-8f28-4d49-a9a3-f19643f494fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:18:26.822097  325211 system_pods.go:61] "kindnet-fk7tk" [8240271d-256f-4fde-82b4-0c071eb000b6] Running
	I1108 09:18:26.822110  325211 system_pods.go:61] "kube-apiserver-newest-cni-620528" [a9d10205-e74b-49a0-ab30-fc4274b6c40a] Running
	I1108 09:18:26.822119  325211 system_pods.go:61] "kube-controller-manager-newest-cni-620528" [5ca73710-f538-4265-a4f3-fe797f8e0cfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:18:26.822123  325211 system_pods.go:61] "kube-proxy-xrf7w" [ef13acfb-b7b4-4aba-8145-f2ce94813f8e] Running
	I1108 09:18:26.822130  325211 system_pods.go:61] "kube-scheduler-newest-cni-620528" [6dd7feec-3ba2-40c2-b761-3aa6855cf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:18:26.822134  325211 system_pods.go:61] "storage-provisioner" [4e2975a8-6a90-42a4-b1bb-b425b99ad8be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:26.822142  325211 system_pods.go:74] duration metric: took 3.081159ms to wait for pod list to return data ...
	I1108 09:18:26.822150  325211 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:18:26.824190  325211 default_sa.go:45] found service account: "default"
	I1108 09:18:26.824207  325211 default_sa.go:55] duration metric: took 2.050725ms for default service account to be created ...
	I1108 09:18:26.824220  325211 kubeadm.go:587] duration metric: took 497.30609ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:26.824239  325211 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:18:26.826499  325211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:18:26.826520  325211 node_conditions.go:123] node cpu capacity is 8
	I1108 09:18:26.826531  325211 node_conditions.go:105] duration metric: took 2.287321ms to run NodePressure ...
	I1108 09:18:26.826540  325211 start.go:242] waiting for startup goroutines ...
	I1108 09:18:27.115331  325211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-620528" context rescaled to 1 replicas
	I1108 09:18:27.115377  325211 start.go:247] waiting for cluster config update ...
	I1108 09:18:27.115389  325211 start.go:256] writing updated cluster config ...
	I1108 09:18:27.115700  325211 ssh_runner.go:195] Run: rm -f paused
	I1108 09:18:27.175370  325211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:27.180420  325211 out.go:179] * Done! kubectl is now configured to use "newest-cni-620528" cluster and "default" namespace by default
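Note the profile switch below: the start log above is for newest-cni-620528, while the ==> ... <== dumps that follow are the diagnostic bundle for the default-k8s-diff-port-677902 profile, in the format `minikube logs` emits. A comparable bundle can be captured with:

    out/minikube-linux-amd64 -p default-k8s-diff-port-677902 logs --file=logs.txt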
	
	
	==> CRI-O <==
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.680905025Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.681090146Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/6f012c581a0bed22f0df144ef8f7e090cd971d21858f894c9481f3694dcd5ecd/merged/etc/passwd: no such file or directory"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.681113798Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/6f012c581a0bed22f0df144ef8f7e090cd971d21858f894c9481f3694dcd5ecd/merged/etc/group: no such file or directory"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.681665818Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.80555421Z" level=info msg="Created container ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e: kube-system/storage-provisioner/storage-provisioner" id=48bdab7c-4c5b-4ae6-9446-d37fd1e9f2a2 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.806375717Z" level=info msg="Starting container: ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e" id=24d13489-f460-42dc-9039-1b2e936e1a1a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:03 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:03.809105654Z" level=info msg="Started container" PID=1714 containerID=ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e description=kube-system/storage-provisioner/storage-provisioner id=24d13489-f460-42dc-9039-1b2e936e1a1a name=/runtime.v1.RuntimeService/StartContainer sandboxID=7f2dc084e3ed2eea0ca8b054c4aa5dd52a0ea12759f3bdbf1f2826b55ee9868d
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.229708799Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.233767895Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.233804668Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.23384017Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.237576808Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.237602994Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.237625004Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.241107706Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.241134071Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.241157734Z" level=info msg="CNI monitoring event WRITE         \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.244592595Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.244621262Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.244643702Z" level=info msg="CNI monitoring event RENAME        \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.248053406Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.248072122Z" level=info msg="Updated default CNI network name to kindnet"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.248103651Z" level=info msg="CNI monitoring event CREATE        \"/etc/cni/net.d/10-kindnet.conflist\" ← \"/etc/cni/net.d/10-kindnet.conflist.temp\""
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.25141806Z" level=info msg="Found CNI network kindnet (type=ptp) at /etc/cni/net.d/10-kindnet.conflist"
	Nov 08 09:18:13 default-k8s-diff-port-677902 crio[568]: time="2025-11-08T09:18:13.251442548Z" level=info msg="Updated default CNI network name to kindnet"
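The CREATE/WRITE/RENAME sequence above is CRI-O's inotify watcher following kindnet's atomic config update (write 10-kindnet.conflist.temp, then rename it into place). The resulting file can be read off the node:

    minikube -p default-k8s-diff-port-677902 ssh -- sudo cat /etc/cni/net.d/10-kindnet.conflist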
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                      CREATED              STATE               NAME                        ATTEMPT             POD ID              POD                                                    NAMESPACE
	ff736adc69a2b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           26 seconds ago       Running             storage-provisioner         1                   7f2dc084e3ed2       storage-provisioner                                    kube-system
	d8a9d5717ea56       a90209bb39e3d7b5fc9daf60c17044ea969aaca0333d672d8c7a34c7446e7ff7                                           32 seconds ago       Exited              dashboard-metrics-scraper   2                   6b50a146c6527       dashboard-metrics-scraper-6ffb444bf9-kht2x             kubernetes-dashboard
	16b4255c8b001       docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029   48 seconds ago       Running             kubernetes-dashboard        0                   480c19e0056a9       kubernetes-dashboard-855c9754f9-tzbhn                  kubernetes-dashboard
	bc88d24433065       52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969                                           57 seconds ago       Running             coredns                     0                   9584ce0e50ed4       coredns-66bc5c9577-x49dj                               kube-system
	ec4a0c71166f5       56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c                                           57 seconds ago       Running             busybox                     1                   8f970770da0a1       busybox                                                default
	336544864c96d       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7                                           57 seconds ago       Running             kube-proxy                  0                   73593da9a407a       kube-proxy-5d9f2                                       kube-system
	fd6ee9dfcc1f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                           57 seconds ago       Exited              storage-provisioner         0                   7f2dc084e3ed2       storage-provisioner                                    kube-system
	590afcaf8e89d       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c                                           57 seconds ago       Running             kindnet-cni                 0                   423c9b2f1c3b7       kindnet-x89ph                                          kube-system
	8193c98b4facb       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f                                           About a minute ago   Running             kube-controller-manager     0                   c2caf8e4393bc       kube-controller-manager-default-k8s-diff-port-677902   kube-system
	3ce4807537535       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813                                           About a minute ago   Running             kube-scheduler              0                   eaf186615b2a8       kube-scheduler-default-k8s-diff-port-677902            kube-system
	31e3f87ef285b       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115                                           About a minute ago   Running             etcd                        0                   bb10aeee9afe0       etcd-default-k8s-diff-port-677902                      kube-system
	88d1ed66cd10f       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97                                           About a minute ago   Running             kube-apiserver              0                   707a66d9d769d       kube-apiserver-default-k8s-diff-port-677902            kube-system
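This table is crictl's container view. The same listing, plus the termination output of the dashboard-metrics-scraper container that already shows two failed attempts, can be pulled directly (crictl accepts ID prefixes):

    minikube -p default-k8s-diff-port-677902 ssh -- sudo crictl ps -a
    minikube -p default-k8s-diff-port-677902 ssh -- sudo crictl logs d8a9d5717ea56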
	
	
	==> coredns [bc88d24433065f56e713adcbcdcd3129f3222bccd28d3e8c4e897902b34dee73] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 3e2243e8b9e7116f563b83b1933f477a68ba9ad4a829ed5d7e54629fb2ce53528b9bc6023030be20be434ad805fd246296dd428c64e9bbef3a70f22b8621f560
	CoreDNS-1.12.1
	linux/amd64, go1.24.1, 707c7c1
	[INFO] 127.0.0.1:43810 - 27394 "HINFO IN 5687207325388689829.965123986894590539. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.031322325s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.32.3/tools/cache/reflector.go:251: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: Unhandled Error
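The dial tcp 10.96.0.1:443: i/o timeout errors mean CoreDNS could not reach the kubernetes Service VIP during its first ~30s; with kindnet on the same node coming up late (its log below shows the identical timeout clearing at 09:18:04), that is the likely window. Two quick checks:

    # Does the ClusterIP Service map to the real apiserver endpoint?
    kubectl --context default-k8s-diff-port-677902 get endpoints kubernetes
    # Did CoreDNS end up Ready once connectivity returned?
    kubectl --context default-k8s-diff-port-677902 -n kube-system get pods -l k8s-app=kube-dns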
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-677902
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-677902
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=default-k8s-diff-port-677902
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_16_35_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:16:32 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-677902
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:18:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:30 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Sat, 08 Nov 2025 09:18:02 +0000   Sat, 08 Nov 2025 09:16:51 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.76.2
	  Hostname:    default-k8s-diff-port-677902
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                9a73a23a-0cc4-4911-a4ee-3b28faba34c9
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 coredns-66bc5c9577-x49dj                                100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     110s
	  kube-system                 etcd-default-k8s-diff-port-677902                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         116s
	  kube-system                 kindnet-x89ph                                           100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      110s
	  kube-system                 kube-apiserver-default-k8s-diff-port-677902             250m (3%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-677902    200m (2%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 kube-proxy-5d9f2                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kube-system                 kube-scheduler-default-k8s-diff-port-677902             100m (1%)     0 (0%)      0 (0%)           0 (0%)         116s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         110s
	  kubernetes-dashboard        dashboard-metrics-scraper-6ffb444bf9-kht2x              0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	  kubernetes-dashboard        kubernetes-dashboard-855c9754f9-tzbhn                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         55s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 109s               kube-proxy       
	  Normal  Starting                 57s                kube-proxy       
	  Normal  Starting                 2m1s               kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m (x8 over 2m1s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    116s               kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  116s               kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     116s               kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  Starting                 116s               kubelet          Starting kubelet.
	  Normal  RegisteredNode           111s               node-controller  Node default-k8s-diff-port-677902 event: Registered Node default-k8s-diff-port-677902 in Controller
	  Normal  NodeReady                99s                kubelet          Node default-k8s-diff-port-677902 status is now: NodeReady
	  Normal  Starting                 61s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     61s (x8 over 61s)  kubelet          Node default-k8s-diff-port-677902 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           55s                node-controller  Node default-k8s-diff-port-677902 event: Registered Node default-k8s-diff-port-677902 in Controller
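This dump is what `kubectl describe node` returns; the three `Starting kubelet.` events (2m1s, 116s and 61s ago) are consistent with the restarts the StartStop group performs, and Ready has held since 09:16:51:

    kubectl --context default-k8s-diff-port-677902 describe node default-k8s-diff-port-677902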
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
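The martian source lines are the host kernel logging cross-subnet pod traffic on the docker bridge, routine noise in KIC-based runs rather than a failure signal. They appear when reverse-path filtering and martian logging are enabled on the host, which can be confirmed with:

    sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians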
	
	
	==> etcd [31e3f87ef285bb6886ab7986f8cb89416c41f9e9f569efe93d65730cd71d0db3] <==
	{"level":"warn","ts":"2025-11-08T09:17:31.404646Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43054","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.417870Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43088","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.426037Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43116","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.432390Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43132","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.446003Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43164","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.454186Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43178","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.460599Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43192","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.467917Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43212","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.476106Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43226","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.491742Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43246","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.498345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43262","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:17:31.504911Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:43290","server-name":"","error":"EOF"}
	{"level":"info","ts":"2025-11-08T09:18:02.685804Z","caller":"traceutil/trace.go:172","msg":"trace[1437782594] linearizableReadLoop","detail":"{readStateIndex:681; appliedIndex:681; }","duration":"132.557491ms","start":"2025-11-08T09:18:02.553220Z","end":"2025-11-08T09:18:02.685777Z","steps":["trace[1437782594] 'read index received'  (duration: 132.549971ms)","trace[1437782594] 'applied index is now lower than readState.Index'  (duration: 6.185µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:18:02.685935Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"132.69411ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-11-08T09:18:02.686007Z","caller":"traceutil/trace.go:172","msg":"trace[807879678] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:646; }","duration":"132.786023ms","start":"2025-11-08T09:18:02.553212Z","end":"2025-11-08T09:18:02.685998Z","steps":["trace[807879678] 'agreement among raft nodes before linearized reading'  (duration: 132.643996ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:02.686112Z","caller":"traceutil/trace.go:172","msg":"trace[373780219] transaction","detail":"{read_only:false; response_revision:647; number_of_response:1; }","duration":"144.192959ms","start":"2025-11-08T09:18:02.541902Z","end":"2025-11-08T09:18:02.686095Z","steps":["trace[373780219] 'process raft request'  (duration: 144.047132ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:02.686416Z","caller":"traceutil/trace.go:172","msg":"trace[893313292] transaction","detail":"{read_only:false; response_revision:648; number_of_response:1; }","duration":"138.216563ms","start":"2025-11-08T09:18:02.548183Z","end":"2025-11-08T09:18:02.686400Z","steps":["trace[893313292] 'process raft request'  (duration: 138.132486ms)"],"step_count":1}
	{"level":"warn","ts":"2025-11-08T09:18:02.686456Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"113.642347ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-x49dj\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-08T09:18:02.686500Z","caller":"traceutil/trace.go:172","msg":"trace[1456719482] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-x49dj; range_end:; response_count:1; response_revision:647; }","duration":"113.69979ms","start":"2025-11-08T09:18:02.572790Z","end":"2025-11-08T09:18:02.686490Z","steps":["trace[1456719482] 'agreement among raft nodes before linearized reading'  (duration: 113.504696ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:03.309416Z","caller":"traceutil/trace.go:172","msg":"trace[1126989899] transaction","detail":"{read_only:false; response_revision:650; number_of_response:1; }","duration":"150.856557ms","start":"2025-11-08T09:18:03.158535Z","end":"2025-11-08T09:18:03.309391Z","steps":["trace[1126989899] 'process raft request'  (duration: 125.638507ms)","trace[1126989899] 'compare'  (duration: 24.942951ms)"],"step_count":2}
	{"level":"info","ts":"2025-11-08T09:18:03.678906Z","caller":"traceutil/trace.go:172","msg":"trace[637832210] linearizableReadLoop","detail":"{readStateIndex:686; appliedIndex:686; }","duration":"105.935442ms","start":"2025-11-08T09:18:03.572929Z","end":"2025-11-08T09:18:03.678865Z","steps":["trace[637832210] 'read index received'  (duration: 105.925694ms)","trace[637832210] 'applied index is now lower than readState.Index'  (duration: 7.836µs)"],"step_count":2}
	{"level":"warn","ts":"2025-11-08T09:18:03.679146Z","caller":"txn/util.go:93","msg":"apply request took too long","took":"106.196421ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/coredns-66bc5c9577-x49dj\" limit:1 ","response":"range_response_count:1 size:5944"}
	{"level":"info","ts":"2025-11-08T09:18:03.679193Z","caller":"traceutil/trace.go:172","msg":"trace[455745962] range","detail":"{range_begin:/registry/pods/kube-system/coredns-66bc5c9577-x49dj; range_end:; response_count:1; response_revision:650; }","duration":"106.26111ms","start":"2025-11-08T09:18:03.572920Z","end":"2025-11-08T09:18:03.679181Z","steps":["trace[455745962] 'agreement among raft nodes before linearized reading'  (duration: 106.076564ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:03.679221Z","caller":"traceutil/trace.go:172","msg":"trace[1166764285] transaction","detail":"{read_only:false; response_revision:651; number_of_response:1; }","duration":"107.436536ms","start":"2025-11-08T09:18:03.571769Z","end":"2025-11-08T09:18:03.679205Z","steps":["trace[1166764285] 'process raft request'  (duration: 107.21514ms)"],"step_count":1}
	{"level":"info","ts":"2025-11-08T09:18:03.742221Z","caller":"traceutil/trace.go:172","msg":"trace[604594362] transaction","detail":"{read_only:false; response_revision:652; number_of_response:1; }","duration":"167.19365ms","start":"2025-11-08T09:18:03.575007Z","end":"2025-11-08T09:18:03.742201Z","steps":["trace[604594362] 'process raft request'  (duration: 166.94534ms)"],"step_count":1}
	
	
	==> kernel <==
	 09:18:30 up  1:01,  0 user,  load average: 3.51, 3.84, 2.60
	Linux default-k8s-diff-port-677902 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [590afcaf8e89deeaaa4713575931b18d68731c33427658709a62d54a4119328c] <==
	I1108 09:17:32.929684       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:17:32.929916       1 main.go:139] hostIP = 192.168.76.2
	podIP = 192.168.76.2
	I1108 09:17:32.930072       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:17:32.930090       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:17:32.930111       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:17:33Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:17:33.229215       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:17:33.229304       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:17:33.229369       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:17:33.229533       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	E1108 09:18:03.230536       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.96.0.1:443/api/v1/nodes?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Node"
	E1108 09:18:03.230565       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Pod: Get \"https://10.96.0.1:443/api/v1/pods?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Pod"
	E1108 09:18:03.230571       1 reflector.go:200] "Failed to watch" err="failed to list *v1.Namespace: Get \"https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.Namespace"
	E1108 09:18:03.230531       1 reflector.go:200] "Failed to watch" err="failed to list *v1.NetworkPolicy: Get \"https://10.96.0.1:443/apis/networking.k8s.io/v1/networkpolicies?limit=500&resourceVersion=0\": dial tcp 10.96.0.1:443: i/o timeout" logger="UnhandledError" reflector="pkg/mod/k8s.io/client-go@v0.33.0/tools/cache/reflector.go:285" type="*v1.NetworkPolicy"
	I1108 09:18:04.929634       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:18:04.929672       1 metrics.go:72] Registering metrics
	I1108 09:18:04.929748       1 controller.go:711] "Syncing nftables rules"
	I1108 09:18:13.229362       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:18:13.229408       1 main.go:301] handling current node
	I1108 09:18:23.236564       1 main.go:297] Handling node with IPs: map[192.168.76.2:{}]
	I1108 09:18:23.236599       1 main.go:301] handling current node
	
	
	==> kube-apiserver [88d1ed66cd10fabadec706e16daeed92054907f0bc41e88565bedf15be0d58f1] <==
	I1108 09:17:32.035317       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:17:32.035323       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:17:32.035547       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I1108 09:17:32.035551       1 cache.go:39] Caches are synced for LocalAvailability controller
	I1108 09:17:32.035638       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:17:32.035741       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:17:32.035643       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I1108 09:17:32.036335       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:17:32.036423       1 policy_source.go:240] refreshing policies
	I1108 09:17:32.042806       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I1108 09:17:32.050818       1 cidrallocator.go:301] created ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:17:32.056527       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:17:32.067094       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:17:32.067156       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:17:32.288402       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:17:32.319651       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:17:32.340068       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:17:32.348752       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:17:32.356538       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:17:32.390558       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.111.55.128"}
	I1108 09:17:32.400623       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.98.126.252"}
	I1108 09:17:32.938162       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:17:35.540203       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:17:35.787822       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:17:35.837732       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [8193c98b4facb0289f0fb5b3b07a5310c99aeb35f978c578657a4bac437665cc] <==
	I1108 09:17:35.347171       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I1108 09:17:35.384522       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:17:35.385531       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:17:35.385544       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:17:35.385572       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:17:35.385606       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:17:35.385642       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:17:35.385708       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:17:35.385727       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:17:35.385731       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:17:35.386336       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:17:35.386403       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:17:35.387702       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:17:35.388853       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:17:35.388921       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:17:35.388990       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="default-k8s-diff-port-677902"
	I1108 09:17:35.389026       1 node_lifecycle_controller.go:1067] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1108 09:17:35.391245       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:17:35.391334       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:17:35.392333       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:17:35.394138       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:17:35.396338       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:17:35.401626       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:17:35.403906       1 shared_informer.go:356] "Caches are synced" controller="attach detach"
	I1108 09:17:35.410217       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [336544864c96dae8947ba947a1054111663f204edc02d625ea55a7b4ec6f4882] <==
	I1108 09:17:32.844589       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:17:32.924137       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:17:33.024245       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:17:33.025530       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.76.2"]
	E1108 09:17:33.025669       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:17:33.048429       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:17:33.048498       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:17:33.054750       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:17:33.055317       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:17:33.055358       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:33.058228       1 config.go:200] "Starting service config controller"
	I1108 09:17:33.058251       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:17:33.058501       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:17:33.058572       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:17:33.058490       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:17:33.058652       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:17:33.058744       1 config.go:309] "Starting node config controller"
	I1108 09:17:33.058753       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:17:33.058761       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:17:33.158418       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:17:33.158631       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:17:33.158723       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [3ce4807537535f6b9273f3782b3ca29c1e56532974e2869bca7e6b7057e45242] <==
	I1108 09:17:30.481653       1 serving.go:386] Generated self-signed cert in-memory
	I1108 09:17:32.009454       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:17:32.009486       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:17:32.014530       1 requestheader_controller.go:180] Starting RequestHeaderAuthRequestController
	I1108 09:17:32.014569       1 shared_informer.go:349] "Waiting for caches to sync" controller="RequestHeaderAuthRequestController"
	I1108 09:17:32.014568       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:17:32.014565       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:32.014593       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I1108 09:17:32.014599       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:32.014910       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:17:32.014932       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:17:32.115716       1 shared_informer.go:356] "Caches are synced" controller="RequestHeaderAuthRequestController"
	I1108 09:17:32.115737       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:17:32.115716       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:17:35 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:35.985145     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgpjs\" (UniqueName: \"kubernetes.io/projected/ac0f9c47-0b03-4970-aa59-3a5c15e3435d-kube-api-access-xgpjs\") pod \"kubernetes-dashboard-855c9754f9-tzbhn\" (UID: \"ac0f9c47-0b03-4970-aa59-3a5c15e3435d\") " pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tzbhn"
	Nov 08 09:17:35 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:35.985320     726 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcdv5\" (UniqueName: \"kubernetes.io/projected/6a00085b-d40d-40c1-8ce5-957bb382f725-kube-api-access-jcdv5\") pod \"dashboard-metrics-scraper-6ffb444bf9-kht2x\" (UID: \"6a00085b-d40d-40c1-8ce5-957bb382f725\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x"
	Nov 08 09:17:39 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:39.490452     726 scope.go:117] "RemoveContainer" containerID="1e754fbb5c5abc613e21f513108a486c674a8f708ec39c75c74e52b92d8b9da5"
	Nov 08 09:17:40 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:40.495908     726 scope.go:117] "RemoveContainer" containerID="1e754fbb5c5abc613e21f513108a486c674a8f708ec39c75c74e52b92d8b9da5"
	Nov 08 09:17:40 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:40.496195     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:40 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:40.496415     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:41.451326     726 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:41.500638     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:41.500842     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:17:41 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:41.520562     726 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-855c9754f9-tzbhn" podStartSLOduration=1.324542048 podStartE2EDuration="6.5205365s" podCreationTimestamp="2025-11-08 09:17:35 +0000 UTC" firstStartedPulling="2025-11-08 09:17:36.24105159 +0000 UTC m=+6.903517965" lastFinishedPulling="2025-11-08 09:17:41.437046035 +0000 UTC m=+12.099512417" observedRunningTime="2025-11-08 09:17:41.520362599 +0000 UTC m=+12.182828995" watchObservedRunningTime="2025-11-08 09:17:41.5205365 +0000 UTC m=+12.183002896"
	Nov 08 09:17:44 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:44.494743     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:44 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:44.494990     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:17:57 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:57.439392     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:58 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:58.548653     726 scope.go:117] "RemoveContainer" containerID="f88d0e4d41fb65a0dfdb5abad7a208fb0af4e602e991c0021de4c9bef9ee3763"
	Nov 08 09:17:58 default-k8s-diff-port-677902 kubelet[726]: I1108 09:17:58.549056     726 scope.go:117] "RemoveContainer" containerID="d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	Nov 08 09:17:58 default-k8s-diff-port-677902 kubelet[726]: E1108 09:17:58.549253     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:18:03 default-k8s-diff-port-677902 kubelet[726]: I1108 09:18:03.567487     726 scope.go:117] "RemoveContainer" containerID="fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01"
	Nov 08 09:18:04 default-k8s-diff-port-677902 kubelet[726]: I1108 09:18:04.495577     726 scope.go:117] "RemoveContainer" containerID="d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	Nov 08 09:18:04 default-k8s-diff-port-677902 kubelet[726]: E1108 09:18:04.495803     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:18:17 default-k8s-diff-port-677902 kubelet[726]: I1108 09:18:17.439337     726 scope.go:117] "RemoveContainer" containerID="d8a9d5717ea563768371ab0eba5575a49473399115e4cfe41efa2b2a3ac3b88d"
	Nov 08 09:18:17 default-k8s-diff-port-677902 kubelet[726]: E1108 09:18:17.439579     726 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dashboard-metrics-scraper\" with CrashLoopBackOff: \"back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-6ffb444bf9-kht2x_kubernetes-dashboard(6a00085b-d40d-40c1-8ce5-957bb382f725)\"" pod="kubernetes-dashboard/dashboard-metrics-scraper-6ffb444bf9-kht2x" podUID="6a00085b-d40d-40c1-8ce5-957bb382f725"
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
	Nov 08 09:18:25 default-k8s-diff-port-677902 systemd[1]: kubelet.service: Consumed 1.817s CPU time.
	
	
	==> kubernetes-dashboard [16b4255c8b0018ceca41bb41578fbe85e3341bfcaf4230bca79e8e26c1057dcd] <==
	2025/11/08 09:17:41 Starting overwatch
	2025/11/08 09:17:41 Using namespace: kubernetes-dashboard
	2025/11/08 09:17:41 Using in-cluster config to connect to apiserver
	2025/11/08 09:17:41 Using secret token for csrf signing
	2025/11/08 09:17:41 Initializing csrf token from kubernetes-dashboard-csrf secret
	2025/11/08 09:17:41 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2025/11/08 09:17:41 Successful initial request to the apiserver, version: v1.34.1
	2025/11/08 09:17:41 Generating JWE encryption key
	2025/11/08 09:17:41 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2025/11/08 09:17:41 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2025/11/08 09:17:41 Initializing JWE encryption key from synchronized object
	2025/11/08 09:17:41 Creating in-cluster Sidecar client
	2025/11/08 09:17:41 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2025/11/08 09:17:41 Serving insecurely on HTTP port: 9090
	2025/11/08 09:18:11 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	
	
	==> storage-provisioner [fd6ee9dfcc1f242da3292ca58172aeedeb98f8530aeb0b82cab2abcd4f728f01] <==
	I1108 09:17:32.809728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F1108 09:18:02.812370       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [ff736adc69a2ba08ba435939276954a10d8bb4936138ff2d8319e50d8f11020e] <==
	I1108 09:18:03.820060       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1108 09:18:03.827812       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1108 09:18:03.827865       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	W1108 09:18:03.847792       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:07.303318       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:11.563874       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:15.162658       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:18.216616       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:21.238879       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:21.244407       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:18:21.244559       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1108 09:18:21.244644       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"7a8f9c03-6b30-4ca5-a9cb-a97fbf27f9a3", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-677902_601d6368-7210-4e3b-88b5-d2c4956566cd became leader
	I1108 09:18:21.244714       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-677902_601d6368-7210-4e3b-88b5-d2c4956566cd!
	W1108 09:18:21.249315       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:21.252726       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	I1108 09:18:21.345593       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-677902_601d6368-7210-4e3b-88b5-d2c4956566cd!
	W1108 09:18:23.256230       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:23.262073       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:25.265945       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:25.270227       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:27.272969       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:27.276922       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:29.280930       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	W1108 09:18:29.285268       1 warnings.go:70] v1 Endpoints is deprecated in v1.33+; use discovery.k8s.io/v1 EndpointSlice
	

                                                
                                                
-- /stdout --
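One recurring pattern in the storage-provisioner log above is the `v1 Endpoints is deprecated in v1.33+` warning repeated on every leader-election heartbeat: client-go's default warning handler prints each apiserver `Warning` header it receives, and the provisioner's Endpoints-based lock triggers one per request. For reference, a client that wanted to silence (rather than fix) these would swap the handler. This is a minimal sketch against client-go, not minikube's or the provisioner's own code, and the kubeconfig path is an illustrative assumption:

```go
package main

import (
	"fmt"

	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Drop apiserver Warning headers (e.g. the Endpoints deprecation notice)
	// instead of logging one line per request.
	rest.SetDefaultWarningHandler(rest.NoWarnings{})

	// Illustrative out-of-cluster config load; an in-cluster client such as
	// the provisioner would use rest.InClusterConfig() instead.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	fmt.Println("connecting to", cfg.Host)
}
```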
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902: exit status 2 (344.604273ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:293: <<< TestStartStop/group/default-k8s-diff-port/serial/Pause FAILED: end of post-mortem logs <<<
helpers_test.go:294: ---------------------/post-mortem---------------------------------
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Pause (6.29s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (252.583858ms)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:205: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
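The MK_ADDON_ENABLE_PAUSED failure above comes from minikube's paused-state check, which (per the stderr) shells out to `sudo runc list -f json`; on this node `/run/runc` is missing, so the listing fails with exit status 1 before the addon is ever touched. Below is a rough reconstruction of such a check, hedged as an approximation of the behavior implied by the error text rather than minikube's actual source; the struct is trimmed to fields `runc list -f json` is documented to emit:

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer covers the two fields we care about from `runc list -f json`.
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"` // e.g. "running", "paused"
}

// listPaused shells out to runc the way the failing check appears to.
func listPaused() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		// This is where "open /run/runc: no such file or directory" surfaces.
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var cs []runcContainer
	if err := json.Unmarshal(out, &cs); err != nil {
		return nil, err
	}
	var paused []string
	for _, c := range cs {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPaused()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}
```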
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-620528
helpers_test.go:243: (dbg) docker inspect newest-cni-620528:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9",
	        "Created": "2025-11-08T09:18:04.364605976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 327975,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:18:04.407965326Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/hosts",
	        "LogPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9-json.log",
	        "Name": "/newest-cni-620528",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-620528:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-620528",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9",
	                "LowerDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-620528",
	                "Source": "/var/lib/docker/volumes/newest-cni-620528/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-620528",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-620528",
	                "name.minikube.sigs.k8s.io": "newest-cni-620528",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7b437836591cd5d3aab9f762acc46680a4d534b97a82018e643fabf437fb2b23",
	            "SandboxKey": "/var/run/docker/netns/7b437836591c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33129"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33130"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33133"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33131"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33132"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-620528": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "c6:93:bb:dc:5b:c4",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92c9b3581086a3ec71939baea725cf0a225bd4e6d308483c2f50dd74f662a243",
	                    "EndpointID": "1c208fd5c4660c45cc900be9cab86de5017bbb12475b26481a6cfdf08bf8ae86",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-620528",
	                        "e2bd4d8f6d3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
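Most of the inspect dump above is noise for this failure; the fields the harness actually cares about are the container state and the published ports (here the apiserver's 8443/tcp is mapped to host port 33132). For reference, pulling a single port out of the same JSON can be done with docker's built-in template syntax; a small sketch, with the container name taken from the dump above:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Extract the host port mapped to 8443/tcp using a docker inspect
	// Go-template, instead of parsing the full JSON dump.
	out, err := exec.Command("docker", "inspect", "-f",
		`{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}`,
		"newest-cni-620528").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println("apiserver host port:", strings.TrimSpace(string(out))) // expect 33132
}
```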
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/EnableAddonWhileActive FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-620528 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0 │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                 │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                        │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:16 UTC │
	│ start   │ -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                                       │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:16 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable metrics-server -p default-k8s-diff-port-677902 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                            │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ stop    │ -p default-k8s-diff-port-677902 --alsologtostderr -v=3                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-677902 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-677902 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:17:58
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:17:58.478924  325211 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:17:58.479071  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479083  325211 out.go:374] Setting ErrFile to fd 2...
	I1108 09:17:58.479096  325211 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:17:58.479366  325211 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:17:58.479861  325211 out.go:368] Setting JSON to false
	I1108 09:17:58.481212  325211 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3629,"bootTime":1762589849,"procs":328,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:17:58.481320  325211 start.go:143] virtualization: kvm guest
	I1108 09:17:58.483829  325211 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:17:58.485799  325211 notify.go:221] Checking for updates...
	I1108 09:17:58.485811  325211 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:17:58.487583  325211 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:17:58.489038  325211 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:17:58.490367  325211 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:17:58.491457  325211 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:17:58.492651  325211 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:17:58.494295  325211 config.go:182] Loaded profile config "default-k8s-diff-port-677902": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494419  325211 config.go:182] Loaded profile config "embed-certs-271910": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494527  325211 config.go:182] Loaded profile config "no-preload-220714": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:17:58.494637  325211 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:17:58.521877  325211 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:17:58.522010  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.588747  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.576854709 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.588862  325211 docker.go:319] overlay module found
	I1108 09:17:58.590962  325211 out.go:179] * Using the docker driver based on user configuration
	I1108 09:17:58.592340  325211 start.go:309] selected driver: docker
	I1108 09:17:58.592358  325211 start.go:930] validating driver "docker" against <nil>
	I1108 09:17:58.592371  325211 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:17:58.593036  325211 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:17:58.659441  325211 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:64 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 09:17:58.646701871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:17:58.659624  325211 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	W1108 09:17:58.659658  325211 out.go:285] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I1108 09:17:58.659915  325211 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:17:58.662513  325211 out.go:179] * Using Docker driver with root privileges
	I1108 09:17:58.663816  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:17:58.663873  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:17:58.663883  325211 start_flags.go:336] Found "CNI" CNI - setting NetworkPlugin=cni
	I1108 09:17:58.663955  325211 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnet
ClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:17:58.665267  325211 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:17:58.666553  325211 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:17:58.667895  325211 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:17:58.669060  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:58.669119  325211 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:17:58.669133  325211 cache.go:59] Caching tarball of preloaded images
	I1108 09:17:58.669179  325211 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:17:58.669265  325211 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:17:58.669277  325211 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:17:58.669428  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:17:58.669460  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json: {Name:mk81817e2e19a8fdfa1ca2cba702e48d1cb06c89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:17:58.692744  325211 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:17:58.692762  325211 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:17:58.692786  325211 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:17:58.692814  325211 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:17:58.692902  325211 start.go:364] duration metric: took 71.682µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:17:58.692929  325211 start.go:93] Provisioning new machine with config: &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServer
Name:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fals
e DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:17:58.693004  325211 start.go:125] createHost starting for "" (driver="docker")
	W1108 09:18:00.076917  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:18:02.690159  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:17:58.696492  325211 out.go:252] * Creating docker container (CPUs=2, Memory=3072MB) ...
	I1108 09:17:58.696765  325211 start.go:159] libmachine.API.Create for "newest-cni-620528" (driver="docker")
	I1108 09:17:58.696803  325211 client.go:173] LocalClient.Create starting
	I1108 09:17:58.696917  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem
	I1108 09:17:58.696958  325211 main.go:143] libmachine: Decoding PEM data...
	I1108 09:17:58.696982  325211 main.go:143] libmachine: Parsing certificate...
	I1108 09:17:58.697061  325211 main.go:143] libmachine: Reading certificate data from /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem
	I1108 09:17:58.697100  325211 main.go:143] libmachine: Decoding PEM data...
	I1108 09:17:58.697116  325211 main.go:143] libmachine: Parsing certificate...
	I1108 09:17:58.697562  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1108 09:17:58.717266  325211 cli_runner.go:211] docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1108 09:17:58.717347  325211 network_create.go:284] running [docker network inspect newest-cni-620528] to gather additional debugging logs...
	I1108 09:17:58.717379  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528
	W1108 09:17:58.736456  325211 cli_runner.go:211] docker network inspect newest-cni-620528 returned with exit code 1
	I1108 09:17:58.736492  325211 network_create.go:287] error running [docker network inspect newest-cni-620528]: docker network inspect newest-cni-620528: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network newest-cni-620528 not found
	I1108 09:17:58.736508  325211 network_create.go:289] output of [docker network inspect newest-cni-620528]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network newest-cni-620528 not found
	
	** /stderr **
	I1108 09:17:58.736599  325211 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:17:58.758028  325211 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
	I1108 09:17:58.758799  325211 network.go:211] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName:br-69402498439f IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:7a:64:3c:58:48:b9} reservation:<nil>}
	I1108 09:17:58.759757  325211 network.go:211] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 IsPrivate:true Interface:{IfaceName:br-11dfd15cc420 IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:22:1d:c0:7a:ca:31} reservation:<nil>}
	I1108 09:17:58.760782  325211 network.go:211] skipping subnet 192.168.76.0/24 that is taken: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 IsPrivate:true Interface:{IfaceName:br-3530cc966e77 IfaceIPv4:192.168.76.1 IfaceMTU:1500 IfaceMAC:1e:ab:9a:62:0b:ef} reservation:<nil>}
	I1108 09:17:58.761727  325211 network.go:211] skipping subnet 192.168.85.0/24 that is taken: &{IP:192.168.85.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.85.0/24 Gateway:192.168.85.1 ClientMin:192.168.85.2 ClientMax:192.168.85.254 Broadcast:192.168.85.255 IsPrivate:true Interface:{IfaceName:br-ea0d0f62e0b2 IfaceIPv4:192.168.85.1 IfaceMTU:1500 IfaceMAC:06:91:c3:f9:f2:45} reservation:<nil>}
	I1108 09:17:58.762519  325211 network.go:211] skipping subnet 192.168.94.0/24 that is taken: &{IP:192.168.94.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.94.0/24 Gateway:192.168.94.1 ClientMin:192.168.94.2 ClientMax:192.168.94.254 Broadcast:192.168.94.255 IsPrivate:true Interface:{IfaceName:br-d2c6206fd833 IfaceIPv4:192.168.94.1 IfaceMTU:1500 IfaceMAC:02:72:29:08:bd:5d} reservation:<nil>}
	I1108 09:17:58.764114  325211 network.go:206] using free private subnet 192.168.103.0/24: &{IP:192.168.103.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.103.0/24 Gateway:192.168.103.1 ClientMin:192.168.103.2 ClientMax:192.168.103.254 Broadcast:192.168.103.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001f8c0d0}
	I1108 09:17:58.764142  325211 network_create.go:124] attempt to create docker network newest-cni-620528 192.168.103.0/24 with gateway 192.168.103.1 and MTU of 1500 ...
	I1108 09:17:58.764193  325211 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.103.0/24 --gateway=192.168.103.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=newest-cni-620528 newest-cni-620528
	I1108 09:17:58.832507  325211 network_create.go:108] docker network newest-cni-620528 192.168.103.0/24 created
	I1108 09:17:58.832544  325211 kic.go:121] calculated static IP "192.168.103.2" for the "newest-cni-620528" container
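The six "skipping subnet" probes above step through candidate /24s by 9 in the third octet (49, 58, 67, 76, 85, 94) until 192.168.103.0/24 comes up free, after which the static container IP .2 is derived. A minimal Go sketch of that scan, assuming a hypothetical taken set in place of the host-bridge inspection minikube's network package actually performs:

    package main

    import "fmt"

    // freeSubnet mirrors the scan in the log: walk 192.168.<n>.0/24
    // candidates with a step of 9 in the third octet and return the first
    // subnet no existing bridge occupies. The taken map is a stand-in for
    // probing host interfaces (br-b3f2c64ee2dd etc. above).
    func freeSubnet(taken map[string]bool) string {
        for octet := 49; octet <= 254; octet += 9 {
            subnet := fmt.Sprintf("192.168.%d.0/24", octet)
            if !taken[subnet] {
                return subnet
            }
        }
        return ""
    }

    func main() {
        taken := map[string]bool{
            "192.168.49.0/24": true, "192.168.58.0/24": true,
            "192.168.67.0/24": true, "192.168.76.0/24": true,
            "192.168.85.0/24": true, "192.168.94.0/24": true,
        }
        fmt.Println(freeSubnet(taken)) // 192.168.103.0/24, as chosen above
    }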
	I1108 09:17:58.832610  325211 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1108 09:17:58.853554  325211 cli_runner.go:164] Run: docker volume create newest-cni-620528 --label name.minikube.sigs.k8s.io=newest-cni-620528 --label created_by.minikube.sigs.k8s.io=true
	I1108 09:17:58.877252  325211 oci.go:103] Successfully created a docker volume newest-cni-620528
	I1108 09:17:58.877433  325211 cli_runner.go:164] Run: docker run --rm --name newest-cni-620528-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-620528 --entrypoint /usr/bin/test -v newest-cni-620528:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -d /var/lib
	I1108 09:17:59.367458  325211 oci.go:107] Successfully prepared a docker volume newest-cni-620528
	I1108 09:17:59.367498  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:17:59.367522  325211 kic.go:194] Starting extracting preloaded images to volume ...
	I1108 09:17:59.367593  325211 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-620528:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir
	W1108 09:18:05.076934  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	W1108 09:18:07.078212  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:18:04.272478  325211 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v newest-cni-620528:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 -I lz4 -xf /preloaded.tar -C /extractDir: (4.904840042s)
	I1108 09:18:04.272514  325211 kic.go:203] duration metric: took 4.90498935s to extract preloaded images to volume ...
	W1108 09:18:04.272612  325211 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W1108 09:18:04.272742  325211 oci.go:252] Your kernel does not support CPU cfs period/quota or the cgroup is not mounted.
	I1108 09:18:04.272940  325211 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1108 09:18:04.343948  325211 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname newest-cni-620528 --name newest-cni-620528 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=newest-cni-620528 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=newest-cni-620528 --network newest-cni-620528 --ip 192.168.103.2 --volume newest-cni-620528:/var --security-opt apparmor=unconfined --memory=3072mb -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1
	I1108 09:18:04.742474  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Running}}
	I1108 09:18:04.764312  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:04.784485  325211 cli_runner.go:164] Run: docker exec newest-cni-620528 stat /var/lib/dpkg/alternatives/iptables
	I1108 09:18:04.838693  325211 oci.go:144] the created container "newest-cni-620528" has a running status.
	I1108 09:18:04.838725  325211 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa...
	I1108 09:18:05.369787  325211 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1108 09:18:05.457128  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:05.479326  325211 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1108 09:18:05.479354  325211 kic_runner.go:114] Args: [docker exec --privileged newest-cni-620528 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1108 09:18:05.539352  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:05.562723  325211 machine.go:94] provisionDockerMachine start ...
	I1108 09:18:05.562853  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.583585  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.583921  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.583937  325211 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:18:05.727446  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:05.727474  325211 ubuntu.go:182] provisioning hostname "newest-cni-620528"
	I1108 09:18:05.727531  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.746860  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.747202  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.747227  325211 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-620528 && echo "newest-cni-620528" | sudo tee /etc/hostname
	I1108 09:18:05.888726  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:05.888814  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:05.908669  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:05.908892  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:05.908930  325211 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-620528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-620528/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-620528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:18:06.037040  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
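provisionDockerMachine drives the steps above (hostname, /etc/hosts fixup) over the "native" SSH client against the container's published port 22, here 127.0.0.1:33129 with the generated id_rsa. A sketch of one such round trip using golang.org/x/crypto/ssh; the user, host-key handling, and error paths are assumptions, not minikube's exact code:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    // runSSH dials the forwarded port, authenticates with the machine key,
    // and runs a single command -- the shape of the "About to run SSH
    // command" / "SSH cmd err, output" pairs in the log.
    func runSSH(addr, keyPath, cmd string) (string, error) {
        key, err := os.ReadFile(keyPath)
        if err != nil {
            return "", err
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            return "", err
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // illustrative only
        }
        client, err := ssh.Dial("tcp", addr, cfg)
        if err != nil {
            return "", err
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            return "", err
        }
        defer session.Close()
        out, err := session.Output(cmd)
        return string(out), err
    }

    func main() {
        out, err := runSSH("127.0.0.1:33129",
            "/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa",
            "hostname")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Print(out) // newest-cni-620528
    }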
	I1108 09:18:06.037068  325211 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:18:06.037142  325211 ubuntu.go:190] setting up certificates
	I1108 09:18:06.037152  325211 provision.go:84] configureAuth start
	I1108 09:18:06.037215  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:06.055504  325211 provision.go:143] copyHostCerts
	I1108 09:18:06.055556  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:18:06.055570  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:18:06.055648  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:18:06.055756  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:18:06.055768  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:18:06.055809  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:18:06.055888  325211 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:18:06.055898  325211 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:18:06.055933  325211 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:18:06.056003  325211 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-620528 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-620528]
	I1108 09:18:06.537976  325211 provision.go:177] copyRemoteCerts
	I1108 09:18:06.538036  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:18:06.538071  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:06.557256  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:06.654533  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:18:06.676656  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:18:06.695147  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:18:06.716798  325211 provision.go:87] duration metric: took 679.62911ms to configureAuth
	I1108 09:18:06.716829  325211 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:18:06.717067  325211 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:06.717198  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:06.738275  325211 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:06.738563  325211 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33129 <nil> <nil>}
	I1108 09:18:06.738581  325211 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:18:06.981160  325211 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:18:06.981185  325211 machine.go:97] duration metric: took 1.418436634s to provisionDockerMachine
	I1108 09:18:06.981197  325211 client.go:176] duration metric: took 8.28438328s to LocalClient.Create
	I1108 09:18:06.981213  325211 start.go:167] duration metric: took 8.284449883s to libmachine.API.Create "newest-cni-620528"
	I1108 09:18:06.981223  325211 start.go:293] postStartSetup for "newest-cni-620528" (driver="docker")
	I1108 09:18:06.981235  325211 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:18:06.981314  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:18:06.981372  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.002647  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.105621  325211 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:18:07.109460  325211 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:18:07.109484  325211 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:18:07.109499  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:18:07.109560  325211 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:18:07.109672  325211 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:18:07.109799  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:18:07.117996  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:07.140135  325211 start.go:296] duration metric: took 158.897937ms for postStartSetup
	I1108 09:18:07.140513  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:07.161877  325211 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:07.162158  325211 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:18:07.162210  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.180553  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.271941  325211 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:18:07.276948  325211 start.go:128] duration metric: took 8.583931143s to createHost
	I1108 09:18:07.276971  325211 start.go:83] releasing machines lock for "newest-cni-620528", held for 8.584057332s
	I1108 09:18:07.277031  325211 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:07.295640  325211 ssh_runner.go:195] Run: cat /version.json
	I1108 09:18:07.295700  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.295708  325211 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:18:07.295767  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:07.316331  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.318970  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:07.462968  325211 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:07.470084  325211 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:18:07.506884  325211 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:18:07.511834  325211 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:18:07.511901  325211 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:18:07.550104  325211 cni.go:262] disabled [/etc/cni/net.d/10-crio-bridge.conflist.disabled, /etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I1108 09:18:07.550130  325211 start.go:496] detecting cgroup driver to use...
	I1108 09:18:07.550167  325211 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:18:07.550207  325211 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:18:07.568646  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:18:07.581696  325211 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:18:07.581749  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:18:07.598216  325211 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:18:07.615476  325211 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:18:07.707144  325211 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:18:07.802881  325211 docker.go:234] disabling docker service ...
	I1108 09:18:07.802943  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:18:07.822170  325211 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:18:07.836245  325211 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:18:07.933480  325211 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:18:08.019451  325211 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:18:08.034231  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:18:08.048749  325211 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:18:08.048808  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.061998  325211 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:18:08.062059  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.072440  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.082524  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.092024  325211 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:18:08.100534  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.110621  325211 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.124570  325211 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:08.133373  325211 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:18:08.140578  325211 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
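The run of sed commands above edits /etc/crio/crio.conf.d/02-crio.conf in place: pause_image is pinned to registry.k8s.io/pause:3.10.1, cgroup_manager is set to "systemd", conmon_cgroup to "pod", and an unprivileged-port sysctl is injected, before crio is restarted. An in-memory Go equivalent of the key-assignment rewrites (a toy; minikube shells the sed out over SSH rather than doing this):

    package main

    import (
        "fmt"
        "regexp"
    )

    // setKey mirrors sed -i 's|^.*<key> = .*$|<key> = "<val>"|' on a conf
    // blob: any line assigning key is replaced with the quoted value.
    func setKey(conf, key, val string) string {
        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
        return re.ReplaceAllString(conf, key+` = "`+val+`"`)
    }

    func main() {
        conf := "pause_image = \"registry.k8s.io/pause:3.9\"\ncgroup_manager = \"cgroupfs\"\n"
        conf = setKey(conf, "pause_image", "registry.k8s.io/pause:3.10.1")
        conf = setKey(conf, "cgroup_manager", "systemd")
        fmt.Print(conf)
    }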
	I1108 09:18:08.147929  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:08.225503  325211 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:18:08.341819  325211 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:18:08.341873  325211 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:18:08.345953  325211 start.go:564] Will wait 60s for crictl version
	I1108 09:18:08.346005  325211 ssh_runner.go:195] Run: which crictl
	I1108 09:18:08.349629  325211 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:18:08.373232  325211 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:18:08.373330  325211 ssh_runner.go:195] Run: crio --version
	I1108 09:18:08.401094  325211 ssh_runner.go:195] Run: crio --version
	I1108 09:18:08.430369  325211 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:18:08.431733  325211 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:18:08.449726  325211 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:18:08.453798  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:08.465344  325211 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:18:08.466743  325211 kubeadm.go:884] updating cluster {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false Disa
bleMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:18:08.466899  325211 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:08.466970  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	W1108 09:18:09.576395  318772 pod_ready.go:104] pod "coredns-66bc5c9577-x49dj" is not "Ready", error: <nil>
	I1108 09:18:11.576747  318772 pod_ready.go:94] pod "coredns-66bc5c9577-x49dj" is "Ready"
	I1108 09:18:11.576778  318772 pod_ready.go:86] duration metric: took 38.005451155s for pod "coredns-66bc5c9577-x49dj" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.579411  318772 pod_ready.go:83] waiting for pod "etcd-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.583270  318772 pod_ready.go:94] pod "etcd-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.583301  318772 pod_ready.go:86] duration metric: took 3.867249ms for pod "etcd-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.585244  318772 pod_ready.go:83] waiting for pod "kube-apiserver-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.588870  318772 pod_ready.go:94] pod "kube-apiserver-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.588894  318772 pod_ready.go:86] duration metric: took 3.627506ms for pod "kube-apiserver-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.590818  318772 pod_ready.go:83] waiting for pod "kube-controller-manager-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.775767  318772 pod_ready.go:94] pod "kube-controller-manager-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:11.775796  318772 pod_ready.go:86] duration metric: took 184.958059ms for pod "kube-controller-manager-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:11.976038  318772 pod_ready.go:83] waiting for pod "kube-proxy-5d9f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.376301  318772 pod_ready.go:94] pod "kube-proxy-5d9f2" is "Ready"
	I1108 09:18:12.376329  318772 pod_ready.go:86] duration metric: took 400.26953ms for pod "kube-proxy-5d9f2" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.575624  318772 pod_ready.go:83] waiting for pod "kube-scheduler-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.975734  318772 pod_ready.go:94] pod "kube-scheduler-default-k8s-diff-port-677902" is "Ready"
	I1108 09:18:12.975759  318772 pod_ready.go:86] duration metric: took 400.106156ms for pod "kube-scheduler-default-k8s-diff-port-677902" in "kube-system" namespace to be "Ready" or be gone ...
	I1108 09:18:12.975771  318772 pod_ready.go:40] duration metric: took 39.407892943s for extra waiting for all "kube-system" pods having one of [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] labels to be "Ready" ...
	I1108 09:18:13.020618  318772 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:13.022494  318772 out.go:179] * Done! kubectl is now configured to use "default-k8s-diff-port-677902" cluster and "default" namespace by default
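The pod_ready lines from the default-k8s-diff-port start (PID 318772, interleaved above) poll each kube-system pod until its Ready condition turns true or the wait gives up; coredns took 38s here. A client-go sketch of that loop; the 2-second interval and 4-minute timeout are illustrative tunables, not minikube's exact values:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitPodReady polls until the named pod reports condition Ready=True,
    // the same check behind the `pod "..." is not "Ready"` heartbeats above.
    func waitPodReady(cs kubernetes.Interface, ns, name string) error {
        return wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
                if err != nil {
                    return false, nil // keep polling on transient errors
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)
        if err := waitPodReady(cs, "kube-system", "coredns-66bc5c9577-x49dj"); err != nil {
            fmt.Println("pod not Ready:", err)
        }
    }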
	I1108 09:18:08.499601  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:08.499621  325211 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:18:08.499662  325211 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:08.525110  325211 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:08.525134  325211 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:18:08.525142  325211 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:18:08.525219  325211 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-620528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:18:08.525313  325211 ssh_runner.go:195] Run: crio config
	I1108 09:18:08.573327  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:18:08.573352  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:08.573372  325211 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:18:08.573400  325211 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-620528 NodeName:newest-cni-620528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:18:08.573547  325211 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-620528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1108 09:18:08.573618  325211 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:18:08.582404  325211 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:18:08.582472  325211 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:18:08.590616  325211 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:18:08.603619  325211 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:18:08.618758  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
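The 2214-byte kubeadm.yaml.new written above is the multi-document config printed earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) rendered from the cluster config. A cut-down text/template rendering of the first two documents to show the shape of that step; the struct and field names are illustrative, not minikube's actual bsutil types:

    package main

    import (
        "os"
        "text/template"
    )

    // params carries the handful of values substituted into the config
    // above; the real renderer draws them from KubernetesConfig.
    type params struct {
        NodeIP, PodCIDR, ServiceCIDR, K8sVersion, NodeName string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta4
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.NodeIP}}
      bindPort: 8443
    nodeRegistration:
      criSocket: unix:///var/run/crio/crio.sock
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta4
    kind: ClusterConfiguration
    kubernetesVersion: {{.K8sVersion}}
    networking:
      podSubnet: "{{.PodCIDR}}"
      serviceSubnet: {{.ServiceCIDR}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        t.Execute(os.Stdout, params{
            NodeIP: "192.168.103.2", PodCIDR: "10.42.0.0/16",
            ServiceCIDR: "10.96.0.0/12", K8sVersion: "v1.34.1",
            NodeName: "newest-cni-620528",
        })
    }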
	I1108 09:18:08.631660  325211 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:18:08.635374  325211 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:08.645241  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:08.724266  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:08.747748  325211 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528 for IP: 192.168.103.2
	I1108 09:18:08.747771  325211 certs.go:195] generating shared ca certs ...
	I1108 09:18:08.747792  325211 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.747940  325211 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:18:08.748002  325211 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:18:08.748015  325211 certs.go:257] generating profile certs ...
	I1108 09:18:08.748090  325211 certs.go:364] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key
	I1108 09:18:08.748113  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt with IP's: []
	I1108 09:18:08.887418  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt ...
	I1108 09:18:08.887453  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.crt: {Name:mkef0a2461081e915a23a94a0dff129a9bbd1497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.887643  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key ...
	I1108 09:18:08.887659  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key: {Name:mka694d89084bd9f4458105a6c692b710fbbc73c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:08.887768  325211 certs.go:364] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34
	I1108 09:18:08.887787  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.103.2]
	I1108 09:18:09.159862  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 ...
	I1108 09:18:09.159894  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34: {Name:mke1ad44d78f87b88058a3d23ddbc317f0d1879b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.160086  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34 ...
	I1108 09:18:09.160102  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34: {Name:mka8bc3506ee0b2250d13ad586c09c6d85151fe9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.160232  325211 certs.go:382] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt.88e29f34 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt
	I1108 09:18:09.160351  325211 certs.go:386] copying /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34 -> /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key
	I1108 09:18:09.160445  325211 certs.go:364] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key
	I1108 09:18:09.160467  325211 crypto.go:68] Generating cert /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt with IP's: []
	I1108 09:18:09.384382  325211 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt ...
	I1108 09:18:09.384416  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt: {Name:mk66386520822ac037714f942e30945bee483e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.384603  325211 crypto.go:164] Writing key to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key ...
	I1108 09:18:09.384629  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key: {Name:mk05f803707b48c031dab80c2b264c81f772d955 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:09.384853  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:18:09.384902  325211 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:18:09.384914  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:18:09.384954  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:18:09.384988  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:18:09.385020  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:18:09.385082  325211 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:09.385692  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:18:09.404511  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:18:09.421750  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:18:09.438836  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:18:09.457312  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:18:09.475401  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:18:09.493660  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:18:09.511469  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:18:09.529325  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:18:09.548820  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:18:09.568542  325211 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:18:09.587025  325211 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:18:09.599630  325211 ssh_runner.go:195] Run: openssl version
	I1108 09:18:09.605604  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:18:09.613542  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.617120  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.617172  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:18:09.651950  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
	I1108 09:18:09.660859  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:18:09.669386  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.673162  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.673215  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:18:09.708114  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:18:09.716962  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:18:09.725461  325211 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.729093  325211 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.729148  325211 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:09.762764  325211 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
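	The openssl/ln pairs above implement OpenSSL's hashed-directory lookup: each
	CA under /etc/ssl/certs must also be reachable through a symlink named
	<subject-hash>.0. The same convention by hand (a sketch; /tmp/ca.pem is a
	placeholder path):
	
	  # Compute the subject hash and install the name OpenSSL resolves
	  hash=$(openssl x509 -hash -noout -in /tmp/ca.pem)
	  sudo ln -fs /tmp/ca.pem "/etc/ssl/certs/${hash}.0"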
	I1108 09:18:09.771470  325211 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:18:09.775240  325211 certs.go:400] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1108 09:18:09.775313  325211 kubeadm.go:401] StartCluster: {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:09.775379  325211 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:09.775419  325211 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:09.802548  325211 cri.go:89] found id: ""
	I1108 09:18:09.802614  325211 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:18:09.810703  325211 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1108 09:18:09.818391  325211 kubeadm.go:215] ignoring SystemVerification for kubeadm because of docker driver
	I1108 09:18:09.818434  325211 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1108 09:18:09.825944  325211 kubeadm.go:156] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1108 09:18:09.825965  325211 kubeadm.go:158] found existing configuration files:
	
	I1108 09:18:09.826003  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1108 09:18:09.833772  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1108 09:18:09.833821  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1108 09:18:09.840883  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1108 09:18:09.848092  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1108 09:18:09.848152  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1108 09:18:09.855208  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1108 09:18:09.862522  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1108 09:18:09.862577  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1108 09:18:09.869810  325211 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1108 09:18:09.877264  325211 kubeadm.go:164] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1108 09:18:09.877332  325211 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1108 09:18:09.884880  325211 ssh_runner.go:286] Start: sudo /bin/bash -c "env PATH="/var/lib/minikube/binaries/v1.34.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1108 09:18:09.944123  325211 kubeadm.go:319] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/6.8.0-1043-gcp\n", err: exit status 1
	I1108 09:18:10.005908  325211 kubeadm.go:319] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1108 09:18:21.410632  325211 kubeadm.go:319] [init] Using Kubernetes version: v1.34.1
	I1108 09:18:21.410734  325211 kubeadm.go:319] [preflight] Running pre-flight checks
	I1108 09:18:21.410861  325211 kubeadm.go:319] [preflight] The system verification failed. Printing the output from the verification:
	I1108 09:18:21.410921  325211 kubeadm.go:319] KERNEL_VERSION: 6.8.0-1043-gcp
	I1108 09:18:21.410961  325211 kubeadm.go:319] OS: Linux
	I1108 09:18:21.411005  325211 kubeadm.go:319] CGROUPS_CPU: enabled
	I1108 09:18:21.411051  325211 kubeadm.go:319] CGROUPS_CPUSET: enabled
	I1108 09:18:21.411093  325211 kubeadm.go:319] CGROUPS_DEVICES: enabled
	I1108 09:18:21.411168  325211 kubeadm.go:319] CGROUPS_FREEZER: enabled
	I1108 09:18:21.411220  325211 kubeadm.go:319] CGROUPS_MEMORY: enabled
	I1108 09:18:21.411259  325211 kubeadm.go:319] CGROUPS_PIDS: enabled
	I1108 09:18:21.411331  325211 kubeadm.go:319] CGROUPS_HUGETLB: enabled
	I1108 09:18:21.411374  325211 kubeadm.go:319] CGROUPS_IO: enabled
	I1108 09:18:21.411467  325211 kubeadm.go:319] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1108 09:18:21.411552  325211 kubeadm.go:319] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1108 09:18:21.411625  325211 kubeadm.go:319] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1108 09:18:21.411684  325211 kubeadm.go:319] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1108 09:18:21.413538  325211 out.go:252]   - Generating certificates and keys ...
	I1108 09:18:21.413609  325211 kubeadm.go:319] [certs] Using existing ca certificate authority
	I1108 09:18:21.413671  325211 kubeadm.go:319] [certs] Using existing apiserver certificate and key on disk
	I1108 09:18:21.413729  325211 kubeadm.go:319] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1108 09:18:21.413779  325211 kubeadm.go:319] [certs] Generating "front-proxy-ca" certificate and key
	I1108 09:18:21.413829  325211 kubeadm.go:319] [certs] Generating "front-proxy-client" certificate and key
	I1108 09:18:21.413879  325211 kubeadm.go:319] [certs] Generating "etcd/ca" certificate and key
	I1108 09:18:21.413930  325211 kubeadm.go:319] [certs] Generating "etcd/server" certificate and key
	I1108 09:18:21.414043  325211 kubeadm.go:319] [certs] etcd/server serving cert is signed for DNS names [localhost newest-cni-620528] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:18:21.414143  325211 kubeadm.go:319] [certs] Generating "etcd/peer" certificate and key
	I1108 09:18:21.414357  325211 kubeadm.go:319] [certs] etcd/peer serving cert is signed for DNS names [localhost newest-cni-620528] and IPs [192.168.103.2 127.0.0.1 ::1]
	I1108 09:18:21.414461  325211 kubeadm.go:319] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1108 09:18:21.414548  325211 kubeadm.go:319] [certs] Generating "apiserver-etcd-client" certificate and key
	I1108 09:18:21.414613  325211 kubeadm.go:319] [certs] Generating "sa" key and public key
	I1108 09:18:21.414686  325211 kubeadm.go:319] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1108 09:18:21.414762  325211 kubeadm.go:319] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1108 09:18:21.414828  325211 kubeadm.go:319] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1108 09:18:21.414892  325211 kubeadm.go:319] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1108 09:18:21.414984  325211 kubeadm.go:319] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1108 09:18:21.415066  325211 kubeadm.go:319] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1108 09:18:21.415150  325211 kubeadm.go:319] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1108 09:18:21.415209  325211 kubeadm.go:319] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1108 09:18:21.416674  325211 out.go:252]   - Booting up control plane ...
	I1108 09:18:21.416750  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1108 09:18:21.416832  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1108 09:18:21.416900  325211 kubeadm.go:319] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1108 09:18:21.416989  325211 kubeadm.go:319] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1108 09:18:21.417064  325211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/instance-config.yaml"
	I1108 09:18:21.417169  325211 kubeadm.go:319] [patches] Applied patch of type "application/strategic-merge-patch+json" to target "kubeletconfiguration"
	I1108 09:18:21.417246  325211 kubeadm.go:319] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1108 09:18:21.417298  325211 kubeadm.go:319] [kubelet-start] Starting the kubelet
	I1108 09:18:21.417432  325211 kubeadm.go:319] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1108 09:18:21.417536  325211 kubeadm.go:319] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1108 09:18:21.417588  325211 kubeadm.go:319] [kubelet-check] The kubelet is healthy after 1.0009061s
	I1108 09:18:21.417674  325211 kubeadm.go:319] [control-plane-check] Waiting for healthy control plane components. This can take up to 4m0s
	I1108 09:18:21.417744  325211 kubeadm.go:319] [control-plane-check] Checking kube-apiserver at https://192.168.103.2:8443/livez
	I1108 09:18:21.417824  325211 kubeadm.go:319] [control-plane-check] Checking kube-controller-manager at https://127.0.0.1:10257/healthz
	I1108 09:18:21.417894  325211 kubeadm.go:319] [control-plane-check] Checking kube-scheduler at https://127.0.0.1:10259/livez
	I1108 09:18:21.417957  325211 kubeadm.go:319] [control-plane-check] kube-controller-manager is healthy after 1.103306268s
	I1108 09:18:21.418014  325211 kubeadm.go:319] [control-plane-check] kube-scheduler is healthy after 2.592510436s
	I1108 09:18:21.418078  325211 kubeadm.go:319] [control-plane-check] kube-apiserver is healthy after 4.501564724s
	I1108 09:18:21.418169  325211 kubeadm.go:319] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1108 09:18:21.418299  325211 kubeadm.go:319] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1108 09:18:21.418366  325211 kubeadm.go:319] [upload-certs] Skipping phase. Please see --upload-certs
	I1108 09:18:21.418547  325211 kubeadm.go:319] [mark-control-plane] Marking the node newest-cni-620528 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1108 09:18:21.418595  325211 kubeadm.go:319] [bootstrap-token] Using token: dxtz3l.vknjl9wu6a3ee1z1
	I1108 09:18:21.421142  325211 out.go:252]   - Configuring RBAC rules ...
	I1108 09:18:21.421236  325211 kubeadm.go:319] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1108 09:18:21.421349  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1108 09:18:21.421474  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1108 09:18:21.421579  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1108 09:18:21.421693  325211 kubeadm.go:319] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1108 09:18:21.421785  325211 kubeadm.go:319] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1108 09:18:21.421900  325211 kubeadm.go:319] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1108 09:18:21.421940  325211 kubeadm.go:319] [addons] Applied essential addon: CoreDNS
	I1108 09:18:21.421983  325211 kubeadm.go:319] [addons] Applied essential addon: kube-proxy
	I1108 09:18:21.421989  325211 kubeadm.go:319] 
	I1108 09:18:21.422044  325211 kubeadm.go:319] Your Kubernetes control-plane has initialized successfully!
	I1108 09:18:21.422051  325211 kubeadm.go:319] 
	I1108 09:18:21.422121  325211 kubeadm.go:319] To start using your cluster, you need to run the following as a regular user:
	I1108 09:18:21.422127  325211 kubeadm.go:319] 
	I1108 09:18:21.422162  325211 kubeadm.go:319]   mkdir -p $HOME/.kube
	I1108 09:18:21.422254  325211 kubeadm.go:319]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1108 09:18:21.422353  325211 kubeadm.go:319]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1108 09:18:21.422364  325211 kubeadm.go:319] 
	I1108 09:18:21.422443  325211 kubeadm.go:319] Alternatively, if you are the root user, you can run:
	I1108 09:18:21.422453  325211 kubeadm.go:319] 
	I1108 09:18:21.422517  325211 kubeadm.go:319]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1108 09:18:21.422527  325211 kubeadm.go:319] 
	I1108 09:18:21.422596  325211 kubeadm.go:319] You should now deploy a pod network to the cluster.
	I1108 09:18:21.422682  325211 kubeadm.go:319] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1108 09:18:21.422792  325211 kubeadm.go:319]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1108 09:18:21.422804  325211 kubeadm.go:319] 
	I1108 09:18:21.422915  325211 kubeadm.go:319] You can now join any number of control-plane nodes by copying certificate authorities
	I1108 09:18:21.423005  325211 kubeadm.go:319] and service account keys on each node and then running the following as root:
	I1108 09:18:21.423013  325211 kubeadm.go:319] 
	I1108 09:18:21.423077  325211 kubeadm.go:319]   kubeadm join control-plane.minikube.internal:8443 --token dxtz3l.vknjl9wu6a3ee1z1 \
	I1108 09:18:21.423178  325211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 \
	I1108 09:18:21.423209  325211 kubeadm.go:319] 	--control-plane 
	I1108 09:18:21.423218  325211 kubeadm.go:319] 
	I1108 09:18:21.423320  325211 kubeadm.go:319] Then you can join any number of worker nodes by running the following on each as root:
	I1108 09:18:21.423332  325211 kubeadm.go:319] 
	I1108 09:18:21.423415  325211 kubeadm.go:319] kubeadm join control-plane.minikube.internal:8443 --token dxtz3l.vknjl9wu6a3ee1z1 \
	I1108 09:18:21.423522  325211 kubeadm.go:319] 	--discovery-token-ca-cert-hash sha256:d18a5a5c12dde3c482a52cbee54372a219d8d40374fb3a9e5aa6663aac728575 
	I1108 09:18:21.423547  325211 cni.go:84] Creating CNI manager for ""
	I1108 09:18:21.423556  325211 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:21.424943  325211 out.go:179] * Configuring CNI (Container Networking Interface) ...
	I1108 09:18:21.426074  325211 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1108 09:18:21.430178  325211 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.34.1/kubectl ...
	I1108 09:18:21.430194  325211 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1108 09:18:21.443928  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1108 09:18:21.660106  325211 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1108 09:18:21.660208  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:21.660242  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes newest-cni-620528 minikube.k8s.io/updated_at=2025_11_08T09_18_21_0700 minikube.k8s.io/version=v1.37.0 minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36 minikube.k8s.io/name=newest-cni-620528 minikube.k8s.io/primary=true
	I1108 09:18:21.748522  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:21.748523  325211 ops.go:34] apiserver oom_adj: -16
	I1108 09:18:22.249505  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:22.749414  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:23.249638  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:23.749545  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:24.249056  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:24.749589  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:25.249218  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:25.748898  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:26.249409  325211 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.34.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1108 09:18:26.325371  325211 kubeadm.go:1114] duration metric: took 4.665232347s to wait for elevateKubeSystemPrivileges
	I1108 09:18:26.325408  325211 kubeadm.go:403] duration metric: took 16.550096693s to StartCluster
	I1108 09:18:26.325428  325211 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:26.325506  325211 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:26.326602  325211 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:26.326868  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1108 09:18:26.326886  325211 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:18:26.326952  325211 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:18:26.327074  325211 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-620528"
	I1108 09:18:26.327096  325211 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-620528"
	I1108 09:18:26.327116  325211 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:26.327134  325211 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:26.327098  325211 addons.go:70] Setting default-storageclass=true in profile "newest-cni-620528"
	I1108 09:18:26.327180  325211 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-620528"
	I1108 09:18:26.327530  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.327692  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.328462  325211 out.go:179] * Verifying Kubernetes components...
	I1108 09:18:26.330054  325211 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:26.353318  325211 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:18:26.353369  325211 addons.go:239] Setting addon default-storageclass=true in "newest-cni-620528"
	I1108 09:18:26.353412  325211 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:26.353939  325211 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:26.357811  325211 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:26.357831  325211 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:18:26.357895  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:26.384474  325211 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:26.384501  325211 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:18:26.384579  325211 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:26.390090  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:26.410190  325211 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33129 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:26.423195  325211 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.103.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.34.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1108 09:18:26.475839  325211 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:26.498839  325211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:26.519600  325211 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:26.611163  325211 start.go:977] {"host.minikube.internal": 192.168.103.1} host record injected into CoreDNS's ConfigMap
	I1108 09:18:26.612332  325211 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:18:26.612389  325211 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:18:26.813396  325211 api_server.go:72] duration metric: took 486.477097ms to wait for apiserver process to appear ...
	I1108 09:18:26.813427  325211 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:18:26.813448  325211 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:26.818119  325211 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:18:26.819017  325211 api_server.go:141] control plane version: v1.34.1
	I1108 09:18:26.819045  325211 api_server.go:131] duration metric: took 5.610526ms to wait for apiserver health ...
	I1108 09:18:26.819055  325211 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:18:26.820067  325211 out.go:179] * Enabled addons: storage-provisioner, default-storageclass
	I1108 09:18:26.821184  325211 addons.go:515] duration metric: took 494.232955ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I1108 09:18:26.822044  325211 system_pods.go:59] 8 kube-system pods found
	I1108 09:18:26.822071  325211 system_pods.go:61] "coredns-66bc5c9577-7fndk" [ee377f7d-6e12-40b3-9257-b0558cadc023] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:26.822085  325211 system_pods.go:61] "etcd-newest-cni-620528" [d267a844-8f28-4d49-a9a3-f19643f494fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:18:26.822097  325211 system_pods.go:61] "kindnet-fk7tk" [8240271d-256f-4fde-82b4-0c071eb000b6] Running
	I1108 09:18:26.822110  325211 system_pods.go:61] "kube-apiserver-newest-cni-620528" [a9d10205-e74b-49a0-ab30-fc4274b6c40a] Running
	I1108 09:18:26.822119  325211 system_pods.go:61] "kube-controller-manager-newest-cni-620528" [5ca73710-f538-4265-a4f3-fe797f8e0cfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:18:26.822123  325211 system_pods.go:61] "kube-proxy-xrf7w" [ef13acfb-b7b4-4aba-8145-f2ce94813f8e] Running
	I1108 09:18:26.822130  325211 system_pods.go:61] "kube-scheduler-newest-cni-620528" [6dd7feec-3ba2-40c2-b761-3aa6855cf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:18:26.822134  325211 system_pods.go:61] "storage-provisioner" [4e2975a8-6a90-42a4-b1bb-b425b99ad8be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:26.822142  325211 system_pods.go:74] duration metric: took 3.081159ms to wait for pod list to return data ...
	I1108 09:18:26.822150  325211 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:18:26.824190  325211 default_sa.go:45] found service account: "default"
	I1108 09:18:26.824207  325211 default_sa.go:55] duration metric: took 2.050725ms for default service account to be created ...
	I1108 09:18:26.824220  325211 kubeadm.go:587] duration metric: took 497.30609ms to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:26.824239  325211 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:18:26.826499  325211 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:18:26.826520  325211 node_conditions.go:123] node cpu capacity is 8
	I1108 09:18:26.826531  325211 node_conditions.go:105] duration metric: took 2.287321ms to run NodePressure ...
	I1108 09:18:26.826540  325211 start.go:242] waiting for startup goroutines ...
	I1108 09:18:27.115331  325211 kapi.go:214] "coredns" deployment in "kube-system" namespace and "newest-cni-620528" context rescaled to 1 replicas
	I1108 09:18:27.115377  325211 start.go:247] waiting for cluster config update ...
	I1108 09:18:27.115389  325211 start.go:256] writing updated cluster config ...
	I1108 09:18:27.115700  325211 ssh_runner.go:195] Run: rm -f paused
	I1108 09:18:27.175370  325211 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:27.180420  325211 out.go:179] * Done! kubectl is now configured to use "newest-cni-620528" cluster and "default" namespace by default
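	Readiness here hinges on the direct /healthz probe a few lines above
	(HTTP 200, body "ok"). Roughly the same check by hand (a sketch; the CA
	path assumes the default $HOME/.minikube layout, whereas this run used a
	custom MINIKUBE_HOME):
	
	  # /healthz is readable by unauthenticated clients under default RBAC
	  curl --cacert ~/.minikube/ca.crt https://192.168.103.2:8443/healthz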
	
	
	==> CRI-O <==
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.482611114Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.48539499Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=79b4e55f-1942-40e7-90ef-b6477b0a7070 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.486097925Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=9058d25c-122b-46f0-bbe4-703b0dbc2d84 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.487235651Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.488122966Z" level=info msg="Ran pod sandbox b03ee3363fc108da841c12a361f556abb997b561ea1ca2773c20f3bc03de53e4 with infra container: kube-system/kube-proxy-xrf7w/POD" id=79b4e55f-1942-40e7-90ef-b6477b0a7070 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.488806273Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.489692536Z" level=info msg="Ran pod sandbox 857cc950d3aaa82ad537d5b02b7bea8eb380805174c15e326cfc833d577e020e with infra container: kube-system/kindnet-fk7tk/POD" id=9058d25c-122b-46f0-bbe4-703b0dbc2d84 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.491004128Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=e42f0f1b-eb69-4d90-8d52-f67d7a76ae31 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.49121547Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=1c9a9540-2869-4f56-977f-875664417cf4 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.491908936Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=6a21a8f5-6583-46d9-9cf6-14f69e6111a1 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.492082738Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=51367914-e148-49e4-a94b-510cbbab9824 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.49603877Z" level=info msg="Creating container: kube-system/kindnet-fk7tk/kindnet-cni" id=a2ce9f84-167d-459d-a34a-bbf6dd92a17a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.496141454Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.498974655Z" level=info msg="Creating container: kube-system/kube-proxy-xrf7w/kube-proxy" id=874f82d4-0547-4c9b-a07a-b2d7e6e953c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.499108146Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.501783345Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.50240168Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.504457356Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.504913872Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.53509673Z" level=info msg="Created container 1786855bd49dbd134bd9cc39a996e73ce41027a99f2493564a2577d55ff9250b: kube-system/kindnet-fk7tk/kindnet-cni" id=a2ce9f84-167d-459d-a34a-bbf6dd92a17a name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.536103788Z" level=info msg="Starting container: 1786855bd49dbd134bd9cc39a996e73ce41027a99f2493564a2577d55ff9250b" id=9eb66774-7042-495f-a41e-ae39d0ac7779 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.536182963Z" level=info msg="Created container 5210dec1bc3711eee65dbfe66ef1248f63028ae4c8a26d4bdb0c3d0043818dcb: kube-system/kube-proxy-xrf7w/kube-proxy" id=874f82d4-0547-4c9b-a07a-b2d7e6e953c3 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.536771301Z" level=info msg="Starting container: 5210dec1bc3711eee65dbfe66ef1248f63028ae4c8a26d4bdb0c3d0043818dcb" id=49b7de63-02bd-49cc-9def-021a7762875a name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.538495113Z" level=info msg="Started container" PID=1607 containerID=1786855bd49dbd134bd9cc39a996e73ce41027a99f2493564a2577d55ff9250b description=kube-system/kindnet-fk7tk/kindnet-cni id=9eb66774-7042-495f-a41e-ae39d0ac7779 name=/runtime.v1.RuntimeService/StartContainer sandboxID=857cc950d3aaa82ad537d5b02b7bea8eb380805174c15e326cfc833d577e020e
	Nov 08 09:18:26 newest-cni-620528 crio[770]: time="2025-11-08T09:18:26.540020118Z" level=info msg="Started container" PID=1609 containerID=5210dec1bc3711eee65dbfe66ef1248f63028ae4c8a26d4bdb0c3d0043818dcb description=kube-system/kube-proxy-xrf7w/kube-proxy id=49b7de63-02bd-49cc-9def-021a7762875a name=/runtime.v1.RuntimeService/StartContainer sandboxID=b03ee3363fc108da841c12a361f556abb997b561ea1ca2773c20f3bc03de53e4
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	5210dec1bc371       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   1 second ago        Running             kube-proxy                0                   b03ee3363fc10       kube-proxy-xrf7w                            kube-system
	1786855bd49db       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   1 second ago        Running             kindnet-cni               0                   857cc950d3aaa       kindnet-fk7tk                               kube-system
	fc2650fc925de       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   12 seconds ago      Running             kube-apiserver            0                   e4e07aa54181a       kube-apiserver-newest-cni-620528            kube-system
	ddad39181a738       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   12 seconds ago      Running             kube-controller-manager   0                   3597a340ad860       kube-controller-manager-newest-cni-620528   kube-system
	5fb1a2b2eca45       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   12 seconds ago      Running             etcd                      0                   54dc72143c957       etcd-newest-cni-620528                      kube-system
	1011161e9c15d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   12 seconds ago      Running             kube-scheduler            0                   aa22a735f8081       kube-scheduler-newest-cni-620528            kube-system
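	The table above is crictl output captured from the node. To reproduce it
	against a live profile (profile name taken from this run):
	
	  # crictl ships in the kicbase image; minikube ssh forwards the command
	  minikube -p newest-cni-620528 ssh -- sudo crictl ps -a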
	
	
	==> describe nodes <==
	Name:               newest-cni-620528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-620528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=newest-cni-620528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_18_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:18:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-620528
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:18:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:18:20 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:18:20 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:18:20 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 09:18:20 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/cni/net.d/. Has your network provider started?
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-620528
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fd9cdc5f-2e20-41a6-aefd-53097190daa1
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-620528                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         8s
	  kube-system                 kindnet-fk7tk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      2s
	  kube-system                 kube-apiserver-newest-cni-620528             250m (3%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-controller-manager-newest-cni-620528    200m (2%)     0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kube-proxy-xrf7w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  kube-system                 kube-scheduler-newest-cni-620528             100m (1%)     0 (0%)      0 (0%)           0 (0%)         8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 1s                 kube-proxy       
	  Normal  Starting                 13s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13s (x8 over 13s)  kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13s (x8 over 13s)  kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13s (x8 over 13s)  kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  Starting                 8s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  8s                 kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    8s                 kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8s                 kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3s                 node-controller  Node newest-cni-620528 event: Registered Node newest-cni-620528 in Controller
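	The node.kubernetes.io/not-ready taint above persists only until kindnet
	drops its CNI config into /etc/cni/net.d. One way to block until that
	happens (a sketch; standard kubectl, context name from this run):
	
	  # Wait for the Ready condition instead of polling describe output
	  kubectl --context newest-cni-620528 wait node/newest-cni-620528 --for=condition=Ready --timeout=120s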
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [5fb1a2b2eca45672d2d253667b1ecdca6b982733fd7ebb6703643b7d3f22c651] <==
	{"level":"warn","ts":"2025-11-08T09:18:17.455731Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54242","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.461706Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54252","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.469672Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54270","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.477023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54278","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.484119Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54288","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.499231Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54312","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.505595Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54336","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.511948Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54348","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.518776Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54350","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.528477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54382","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.534738Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54388","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.541233Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54414","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.547473Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54422","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.554241Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54460","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.561379Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54472","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.567751Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.574313Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54508","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.580033Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54534","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.587024Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54552","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.593441Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54568","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.610555Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54596","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.613908Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54610","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.620309Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54636","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.626345Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54642","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:17.676428Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:54656","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:28 up  1:00,  0 user,  load average: 3.51, 3.84, 2.60
	Linux newest-cni-620528 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [1786855bd49dbd134bd9cc39a996e73ce41027a99f2493564a2577d55ff9250b] <==
	I1108 09:18:26.816047       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:18:26.816396       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1108 09:18:26.816543       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:18:26.816563       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:18:26.816594       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:18:27Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:18:27.020444       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:18:27.021149       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:18:27.115620       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:18:27.116072       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:18:27.415935       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:18:27.415970       1 metrics.go:72] Registering metrics
	I1108 09:18:27.416051       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [fc2650fc925de24f5e89f255ae35e2ed3d573509238b69891791cbafbff6ddd0] <==
	I1108 09:18:18.141144       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:18:18.141270       1 shared_informer.go:356] "Caches are synced" controller="crd-autoregister"
	I1108 09:18:18.141357       1 aggregator.go:171] initial CRD sync complete...
	I1108 09:18:18.141412       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:18:18.141424       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:18:18.141431       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:18:18.141555       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:18:18.332004       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:18:19.039380       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I1108 09:18:19.043392       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I1108 09:18:19.043409       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:18:19.520539       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:18:19.558496       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:18:19.644762       1 alloc.go:328] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W1108 09:18:19.650774       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.103.2]
	I1108 09:18:19.652070       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:18:19.656536       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:18:20.054216       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:18:20.812987       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:18:20.822855       1 alloc.go:328] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I1108 09:18:20.829501       1 controller.go:667] quota admission added evaluator for: daemonsets.apps
	I1108 09:18:25.807178       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	I1108 09:18:25.957944       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:18:25.963159       1 cidrallocator.go:277] updated ClusterIP allocator for Service CIDR 10.96.0.0/12
	I1108 09:18:26.155657       1 controller.go:667] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [ddad39181a7384dc66f04001ee1585642582df05bab3bf93af7372bf9b91149a] <==
	I1108 09:18:25.053230       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:18:25.053317       1 shared_informer.go:356] "Caches are synced" controller="taint-eviction-controller"
	I1108 09:18:25.053432       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:18:25.053500       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:18:25.053516       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:18:25.053622       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-620528"
	I1108 09:18:25.053645       1 shared_informer.go:356] "Caches are synced" controller="PV protection"
	I1108 09:18:25.053687       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:18:25.053758       1 shared_informer.go:356] "Caches are synced" controller="ReplicaSet"
	I1108 09:18:25.053814       1 shared_informer.go:356] "Caches are synced" controller="endpoint"
	I1108 09:18:25.053833       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:18:25.053915       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:18:25.054132       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:18:25.054242       1 shared_informer.go:356] "Caches are synced" controller="resource_claim"
	I1108 09:18:25.054795       1 shared_informer.go:356] "Caches are synced" controller="job"
	I1108 09:18:25.055954       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-legacy-unknown"
	I1108 09:18:25.055981       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-serving"
	I1108 09:18:25.056015       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kube-apiserver-client"
	I1108 09:18:25.056050       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrsigning-kubelet-client"
	I1108 09:18:25.057215       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:18:25.057216       1 shared_informer.go:356] "Caches are synced" controller="service account"
	I1108 09:18:25.063008       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:18:25.064111       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:18:25.070571       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:18:25.084219       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [5210dec1bc3711eee65dbfe66ef1248f63028ae4c8a26d4bdb0c3d0043818dcb] <==
	I1108 09:18:26.586322       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:18:26.656310       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:18:26.757110       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:18:26.757163       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1108 09:18:26.757261       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:18:26.777961       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:18:26.778013       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:18:26.784166       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:18:26.784539       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:18:26.784569       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:18:26.786094       1 config.go:200] "Starting service config controller"
	I1108 09:18:26.786172       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:18:26.786225       1 config.go:309] "Starting node config controller"
	I1108 09:18:26.786238       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:18:26.786249       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:18:26.786482       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:18:26.786500       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:18:26.786479       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:18:26.786525       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:18:26.887500       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:18:26.887558       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	I1108 09:18:26.887568       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	
	
	==> kube-scheduler [1011161e9c15dd82580526e8b8dab304e328b284b6dca8e36f488840c34e5ab5] <==
	E1108 09:18:18.087738       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
	E1108 09:18:18.087875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:18:18.088095       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:18:18.088098       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicaSet"
	E1108 09:18:18.088145       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:18:18.088195       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StatefulSet"
	E1108 09:18:18.088391       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:18:18.088430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:18:18.088430       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolumeClaim"
	E1108 09:18:18.088529       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Pod"
	E1108 09:18:18.962875       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
	E1108 09:18:18.984914       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSINode"
	E1108 09:18:19.073648       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ReplicationController"
	E1108 09:18:19.079859       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceSlice: resourceslices.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceslices\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceSlice"
	E1108 09:18:19.102305       1 reflector.go:205] "Failed to watch" err="failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.VolumeAttachment"
	E1108 09:18:19.157516       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PersistentVolume"
	E1108 09:18:19.221931       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
	E1108 09:18:19.222917       1 reflector.go:205] "Failed to watch" err="failed to list *v1.DeviceClass: deviceclasses.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"deviceclasses\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.DeviceClass"
	E1108 09:18:19.254639       1 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIStorageCapacity"
	E1108 09:18:19.261713       1 reflector.go:205] "Failed to watch" err="failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.StorageClass"
	E1108 09:18:19.344199       1 reflector.go:205] "Failed to watch" err="failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.PodDisruptionBudget"
	E1108 09:18:19.354212       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ResourceClaim: resourceclaims.resource.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"resourceclaims\" in API group \"resource.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.ResourceClaim"
	E1108 09:18:19.357501       1 reflector.go:205] "Failed to watch" err="failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Namespace"
	E1108 09:18:19.523209       1 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError" reflector="runtime/asm_amd64.s:1700" type="*v1.ConfigMap"
	I1108 09:18:22.184834       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.636136    1330 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.672651    1330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.672764    1330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.672823    1330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.672916    1330 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: E1108 09:18:21.681051    1330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-620528\" already exists" pod="kube-system/kube-controller-manager-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: E1108 09:18:21.681739    1330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-620528\" already exists" pod="kube-system/kube-scheduler-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: E1108 09:18:21.682220    1330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-620528\" already exists" pod="kube-system/kube-apiserver-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: E1108 09:18:21.682351    1330 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-620528\" already exists" pod="kube-system/etcd-newest-cni-620528"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.719499    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/etcd-newest-cni-620528" podStartSLOduration=1.7194782229999999 podStartE2EDuration="1.719478223s" podCreationTimestamp="2025-11-08 09:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:18:21.719436108 +0000 UTC m=+1.152118217" watchObservedRunningTime="2025-11-08 09:18:21.719478223 +0000 UTC m=+1.152160331"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.743368    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-newest-cni-620528" podStartSLOduration=1.743334662 podStartE2EDuration="1.743334662s" podCreationTimestamp="2025-11-08 09:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:18:21.743253366 +0000 UTC m=+1.175935475" watchObservedRunningTime="2025-11-08 09:18:21.743334662 +0000 UTC m=+1.176016771"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.743496    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-newest-cni-620528" podStartSLOduration=1.743486764 podStartE2EDuration="1.743486764s" podCreationTimestamp="2025-11-08 09:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:18:21.732095351 +0000 UTC m=+1.164777480" watchObservedRunningTime="2025-11-08 09:18:21.743486764 +0000 UTC m=+1.176168873"
	Nov 08 09:18:21 newest-cni-620528 kubelet[1330]: I1108 09:18:21.752707    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-newest-cni-620528" podStartSLOduration=1.752689097 podStartE2EDuration="1.752689097s" podCreationTimestamp="2025-11-08 09:18:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:18:21.752556221 +0000 UTC m=+1.185238331" watchObservedRunningTime="2025-11-08 09:18:21.752689097 +0000 UTC m=+1.185371206"
	Nov 08 09:18:25 newest-cni-620528 kubelet[1330]: I1108 09:18:25.075967    1330 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 09:18:25 newest-cni-620528 kubelet[1330]: I1108 09:18:25.076678    1330 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278242    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-cni-cfg\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278361    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-kube-proxy\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278434    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-lib-modules\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278482    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk27x\" (UniqueName: \"kubernetes.io/projected/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-kube-api-access-xk27x\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278555    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhfh2\" (UniqueName: \"kubernetes.io/projected/8240271d-256f-4fde-82b4-0c071eb000b6-kube-api-access-jhfh2\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278587    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-xtables-lock\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278612    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-lib-modules\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.278633    1330 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-xtables-lock\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.718272    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xrf7w" podStartSLOduration=0.718246447 podStartE2EDuration="718.246447ms" podCreationTimestamp="2025-11-08 09:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:18:26.718223192 +0000 UTC m=+6.150905301" watchObservedRunningTime="2025-11-08 09:18:26.718246447 +0000 UTC m=+6.150928556"
	Nov 08 09:18:26 newest-cni-620528 kubelet[1330]: I1108 09:18:26.718454    1330 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-fk7tk" podStartSLOduration=0.718444623 podStartE2EDuration="718.444623ms" podCreationTimestamp="2025-11-08 09:18:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 09:18:26.705660259 +0000 UTC m=+6.138342378" watchObservedRunningTime="2025-11-08 09:18:26.718444623 +0000 UTC m=+6.151126733"
	

                                                
                                                
-- /stdout --
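The dump above is the output of minikube's built-in log collector. It can be regenerated against the same profile (a sketch; assumes the cluster from this run still exists):

	out/minikube-linux-amd64 -p newest-cni-620528 logs
	# or write it to a file, e.g. for attaching to a GitHub issue:
	out/minikube-linux-amd64 -p newest-cni-620528 logs --file=logs.txt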
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-620528 -n newest-cni-620528
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-620528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7fndk storage-provisioner
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/EnableAddonWhileActive]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner: exit status 1 (59.791467ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7fndk" not found
	Error from server (NotFound): pods "storage-provisioner" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (2.09s)
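For reference, the post-mortem above first lists non-Running pods with a field selector and then describes each by name; the NotFound errors indicate both pods were deleted (or recreated under new names) between the two commands. The same queries can be issued by hand (a sketch; <pod-name> is a placeholder):

	kubectl --context newest-cni-620528 get po -A --field-selector=status.phase!=Running
	kubectl --context newest-cni-620528 describe pod <pod-name> -n kube-system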

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (5.57s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-620528 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p newest-cni-620528 --alsologtostderr -v=1: exit status 80 (2.150508335s)

                                                
                                                
-- stdout --
	* Pausing node newest-cni-620528 ... 
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1108 09:18:42.651484  336639 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:42.651605  336639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:42.651614  336639 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:42.651620  336639 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:42.651846  336639 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:18:42.652076  336639 out.go:368] Setting JSON to false
	I1108 09:18:42.652143  336639 mustload.go:66] Loading cluster: newest-cni-620528
	I1108 09:18:42.652508  336639 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:42.652939  336639 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:42.671015  336639 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:42.671370  336639 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:42.731592  336639 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-08 09:18:42.721727871 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:42.732206  336639 pause.go:60] "namespaces" [kube-system kubernetes-dashboard istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-pause-interval:1m0s auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 binary-mirror: bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:%!s(bool=false) disable-coredns-log:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disable-metrics:%!s(bool=false) disable-optimizations:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) embedcerts:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: extra-disks:%!s(int=0) feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) gpus: ha:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube-builds/iso/21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.37.0-1762018871-21834/minikube-v1.37.0-1762018871-21834-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.37.0-1762018871-21834-amd64.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: maxauditentries:%!s(int=1000) memory: mount:%!s(bool=false) mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:%!s(int=262144) mount-options:[] mount-port:0 mount-string: mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:%!s(bool=false) no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:newest-cni-620528 purge:%!s(bool=false) qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) rootless:%!s(bool=false) schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:%!s(bool=false) socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root static-ip: subnet: trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:%!s(bool=true) wantupdatenotification:%!s(bool=true) wantvirtualboxdriverwarning:%!s(bool=true)]="(MISSING)"
	I1108 09:18:42.734090  336639 out.go:179] * Pausing node newest-cni-620528 ... 
	I1108 09:18:42.735224  336639 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:42.735526  336639 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:42.735564  336639 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:42.753407  336639 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:42.846249  336639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:42.858784  336639 pause.go:52] kubelet running: true
	I1108 09:18:42.858876  336639 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:42.990118  336639 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:42.990210  336639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:43.056523  336639 cri.go:89] found id: "bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52"
	I1108 09:18:43.056546  336639 cri.go:89] found id: "cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91"
	I1108 09:18:43.056549  336639 cri.go:89] found id: "79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3"
	I1108 09:18:43.056552  336639 cri.go:89] found id: "0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea"
	I1108 09:18:43.056555  336639 cri.go:89] found id: "760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969"
	I1108 09:18:43.056558  336639 cri.go:89] found id: "b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428"
	I1108 09:18:43.056560  336639 cri.go:89] found id: ""
	I1108 09:18:43.056602  336639 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:43.068357  336639 retry.go:31] will retry after 152.484339ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:43Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:43.221844  336639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:43.234838  336639 pause.go:52] kubelet running: false
	I1108 09:18:43.234906  336639 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:43.348670  336639 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:43.348743  336639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:43.413077  336639 cri.go:89] found id: "bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52"
	I1108 09:18:43.413119  336639 cri.go:89] found id: "cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91"
	I1108 09:18:43.413126  336639 cri.go:89] found id: "79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3"
	I1108 09:18:43.413131  336639 cri.go:89] found id: "0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea"
	I1108 09:18:43.413136  336639 cri.go:89] found id: "760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969"
	I1108 09:18:43.413141  336639 cri.go:89] found id: "b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428"
	I1108 09:18:43.413145  336639 cri.go:89] found id: ""
	I1108 09:18:43.413192  336639 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:43.425139  336639 retry.go:31] will retry after 261.914588ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:43Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:43.687702  336639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:43.700814  336639 pause.go:52] kubelet running: false
	I1108 09:18:43.700870  336639 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:43.817529  336639 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:43.817620  336639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:43.898807  336639 cri.go:89] found id: "bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52"
	I1108 09:18:43.898829  336639 cri.go:89] found id: "cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91"
	I1108 09:18:43.898833  336639 cri.go:89] found id: "79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3"
	I1108 09:18:43.898836  336639 cri.go:89] found id: "0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea"
	I1108 09:18:43.898839  336639 cri.go:89] found id: "760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969"
	I1108 09:18:43.898842  336639 cri.go:89] found id: "b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428"
	I1108 09:18:43.898844  336639 cri.go:89] found id: ""
	I1108 09:18:43.898881  336639 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:43.910469  336639 retry.go:31] will retry after 611.14634ms: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:43Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:44.522243  336639 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:18:44.536508  336639 pause.go:52] kubelet running: false
	I1108 09:18:44.536576  336639 ssh_runner.go:195] Run: sudo systemctl disable --now kubelet
	I1108 09:18:44.654046  336639 cri.go:54] listing CRI containers in root : {State:running Name: Namespaces:[kube-system kubernetes-dashboard istio-operator]}
	I1108 09:18:44.654135  336639 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I1108 09:18:44.718860  336639 cri.go:89] found id: "bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52"
	I1108 09:18:44.718885  336639 cri.go:89] found id: "cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91"
	I1108 09:18:44.718889  336639 cri.go:89] found id: "79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3"
	I1108 09:18:44.718892  336639 cri.go:89] found id: "0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea"
	I1108 09:18:44.718895  336639 cri.go:89] found id: "760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969"
	I1108 09:18:44.718898  336639 cri.go:89] found id: "b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428"
	I1108 09:18:44.718901  336639 cri.go:89] found id: ""
	I1108 09:18:44.718937  336639 ssh_runner.go:195] Run: sudo runc list -f json
	I1108 09:18:44.732535  336639 out.go:203] 
	W1108 09:18:44.733949  336639 out.go:285] X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:44Z" level=error msg="open /run/runc: no such file or directory"
	
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:44Z" level=error msg="open /run/runc: no such file or directory"
	
	W1108 09:18:44.733968  336639 out.go:285] * 
	* 
	W1108 09:18:44.738087  336639 out.go:308] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log                   │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I1108 09:18:44.739478  336639 out.go:203] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:309: out/minikube-linux-amd64 pause -p newest-cni-620528 --alsologtostderr -v=1 failed: exit status 80
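Each pause attempt in the stderr above follows the same loop: check whether kubelet is active, disable it, enumerate CRI containers in the kube-system/kubernetes-dashboard/istio-operator namespaces via crictl, then run `sudo runc list -f json`, which fails every time with `open /run/runc: no such file or directory` until the retries are exhausted. That state can be inspected by hand on the node (a sketch; whether this CRI-O build keeps its runc state under /run/runc is an assumption):

	out/minikube-linux-amd64 ssh -p newest-cni-620528 -- sudo ls -la /run/runc
	out/minikube-linux-amd64 ssh -p newest-cni-620528 -- sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system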
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-620528
helpers_test.go:243: (dbg) docker inspect newest-cni-620528:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9",
	        "Created": "2025-11-08T09:18:04.364605976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334564,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:18:32.184543618Z",
	            "FinishedAt": "2025-11-08T09:18:31.353088595Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/hosts",
	        "LogPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9-json.log",
	        "Name": "/newest-cni-620528",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-620528:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-620528",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9",
	                "LowerDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-620528",
	                "Source": "/var/lib/docker/volumes/newest-cni-620528/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-620528",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-620528",
	                "name.minikube.sigs.k8s.io": "newest-cni-620528",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "152941532aea24d70365f6c670e3d1c6393c84b8eb777a1468fdf6172d3a5f17",
	            "SandboxKey": "/var/run/docker/netns/152941532aea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-620528": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:55:bb:85:24:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92c9b3581086a3ec71939baea725cf0a225bd4e6d308483c2f50dd74f662a243",
	                    "EndpointID": "41066b804fabe43a192113f88da1e693b1eb84f71dd2001248d5a753cbac8fb8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-620528",
	                        "e2bd4d8f6d3f"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
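The full inspect dump above can be narrowed to single fields with Go templates instead of re-parsing the JSON; these are the same templates this log uses later when minikube checks container state and resolves the SSH port (a sketch against the container above, not part of the test output):

	# container state as recorded above ("running"; minikube pause acts on the workloads inside the node, not on the kic container itself)
	docker container inspect newest-cni-620528 --format={{.State.Status}}
	# host port bound to the container's 22/tcp (33134 in NetworkSettings above)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' newest-cni-620528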
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528: exit status 2 (317.373678ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
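The harness treats this non-zero exit as potentially benign ("may be ok") because the host state still prints as Running; a minimal shell equivalent of that tolerant check, assuming the same binary and profile used above:

	# a non-zero exit code from "status" does not by itself mean the host is down;
	# accept the result as long as the host state reads Running
	if out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528 | grep -q '^Running$'; then
		echo "host Running; exit status 2 tolerated, post-mortem continues"
	fi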
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-620528 logs -n 25
E1108 09:18:45.876822    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/calico-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-677902 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-677902 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ stop    │ -p newest-cni-620528 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p default-k8s-diff-port-677902                                                                                                                                                                                                               │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-620528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p default-k8s-diff-port-677902                                                                                                                                                                                                               │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ newest-cni-620528 image list --format=json                                                                                                                                                                                                    │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p newest-cni-620528 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:18:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:18:31.953826  334359 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:31.954048  334359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:31.954056  334359 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:31.954060  334359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:31.954271  334359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:18:31.954704  334359 out.go:368] Setting JSON to false
	I1108 09:18:31.955653  334359 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3663,"bootTime":1762589849,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:18:31.955737  334359 start.go:143] virtualization: kvm guest
	I1108 09:18:31.957774  334359 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:18:31.959088  334359 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:18:31.959111  334359 notify.go:221] Checking for updates...
	I1108 09:18:31.961930  334359 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:18:31.963381  334359 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:31.964619  334359 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:18:31.965870  334359 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:18:31.967135  334359 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:18:31.968759  334359 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:31.969172  334359 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:18:31.993139  334359 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:18:31.993260  334359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:32.049546  334359 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-08 09:18:32.039509341 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:32.049695  334359 docker.go:319] overlay module found
	I1108 09:18:32.052141  334359 out.go:179] * Using the docker driver based on existing profile
	I1108 09:18:32.053364  334359 start.go:309] selected driver: docker
	I1108 09:18:32.053378  334359 start.go:930] validating driver "docker" against &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:32.053456  334359 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:18:32.054046  334359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:32.111861  334359 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-08 09:18:32.102294877 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:32.112146  334359 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:32.112172  334359 cni.go:84] Creating CNI manager for ""
	I1108 09:18:32.112216  334359 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:32.112247  334359 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:32.114073  334359 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:18:32.115399  334359 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:18:32.116707  334359 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:18:32.117968  334359 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:32.117998  334359 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:18:32.118013  334359 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:18:32.118038  334359 cache.go:59] Caching tarball of preloaded images
	I1108 09:18:32.118164  334359 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:18:32.118178  334359 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:18:32.118356  334359 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:32.138662  334359 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:18:32.138688  334359 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:18:32.138703  334359 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:18:32.138730  334359 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:18:32.138796  334359 start.go:364] duration metric: took 44.667µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:18:32.138817  334359 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:18:32.138823  334359 fix.go:54] fixHost starting: 
	I1108 09:18:32.139093  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:32.156629  334359 fix.go:112] recreateIfNeeded on newest-cni-620528: state=Stopped err=<nil>
	W1108 09:18:32.156657  334359 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:18:32.158610  334359 out.go:252] * Restarting existing docker container for "newest-cni-620528" ...
	I1108 09:18:32.158677  334359 cli_runner.go:164] Run: docker start newest-cni-620528
	I1108 09:18:32.438537  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:32.461107  334359 kic.go:430] container "newest-cni-620528" state is running.
	I1108 09:18:32.461556  334359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:32.482947  334359 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:32.483170  334359 machine.go:94] provisionDockerMachine start ...
	I1108 09:18:32.483235  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:32.503044  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:32.503357  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:32.503373  334359 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:18:32.503937  334359 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52836->127.0.0.1:33134: read: connection reset by peer
	I1108 09:18:35.632305  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:35.632364  334359 ubuntu.go:182] provisioning hostname "newest-cni-620528"
	I1108 09:18:35.632433  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:35.652178  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:35.652420  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:35.652443  334359 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-620528 && echo "newest-cni-620528" | sudo tee /etc/hostname
	I1108 09:18:35.789044  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:35.789134  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:35.807870  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:35.808132  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:35.808151  334359 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-620528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-620528/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-620528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:18:35.934984  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	I1108 09:18:35.935010  334359 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:18:35.935037  334359 ubuntu.go:190] setting up certificates
	I1108 09:18:35.935074  334359 provision.go:84] configureAuth start
	I1108 09:18:35.935126  334359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:35.953694  334359 provision.go:143] copyHostCerts
	I1108 09:18:35.953748  334359 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:18:35.953766  334359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:18:35.953829  334359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:18:35.953961  334359 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:18:35.953974  334359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:18:35.954006  334359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:18:35.954064  334359 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:18:35.954072  334359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:18:35.954094  334359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:18:35.954151  334359 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-620528 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-620528]
	I1108 09:18:36.080750  334359 provision.go:177] copyRemoteCerts
	I1108 09:18:36.080811  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:18:36.080844  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.099244  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.192779  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:18:36.209789  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:18:36.226539  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:18:36.243134  334359 provision.go:87] duration metric: took 308.049591ms to configureAuth
	I1108 09:18:36.243164  334359 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:18:36.243376  334359 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:36.243513  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.262092  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:36.262377  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:36.262400  334359 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:18:36.510057  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:18:36.510082  334359 machine.go:97] duration metric: took 4.026899157s to provisionDockerMachine
	I1108 09:18:36.510095  334359 start.go:293] postStartSetup for "newest-cni-620528" (driver="docker")
	I1108 09:18:36.510108  334359 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:18:36.510175  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:18:36.510217  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.528769  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.621635  334359 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:18:36.625056  334359 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:18:36.625080  334359 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:18:36.625090  334359 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:18:36.625172  334359 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:18:36.625243  334359 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:18:36.625377  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:18:36.632681  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:36.649514  334359 start.go:296] duration metric: took 139.40288ms for postStartSetup
	I1108 09:18:36.649610  334359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:18:36.649648  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.667733  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.758494  334359 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1108 09:18:36.763276  334359 fix.go:56] duration metric: took 4.624446908s for fixHost
	I1108 09:18:36.763319  334359 start.go:83] releasing machines lock for "newest-cni-620528", held for 4.624510125s
	I1108 09:18:36.763383  334359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:36.781602  334359 ssh_runner.go:195] Run: cat /version.json
	I1108 09:18:36.781652  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.781698  334359 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:18:36.781748  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.801220  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.801805  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.891347  334359 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:36.943300  334359 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:18:36.977988  334359 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:18:36.982628  334359 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:18:36.982679  334359 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:18:36.990136  334359 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I1108 09:18:36.990158  334359 start.go:496] detecting cgroup driver to use...
	I1108 09:18:36.990189  334359 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:18:36.990229  334359 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:18:37.004070  334359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:18:37.016204  334359 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:18:37.016252  334359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:18:37.031042  334359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:18:37.042796  334359 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:18:37.116169  334359 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:18:37.197068  334359 docker.go:234] disabling docker service ...
	I1108 09:18:37.197150  334359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:18:37.211457  334359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:18:37.223640  334359 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:18:37.298267  334359 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:18:37.377160  334359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:18:37.389141  334359 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:18:37.403403  334359 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:18:37.403457  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.412409  334359 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:18:37.412477  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.421158  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.429474  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.437775  334359 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:18:37.445932  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.454974  334359 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.463427  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.472078  334359 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:18:37.479077  334359 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:18:37.486652  334359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:37.565514  334359 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1108 09:18:37.674157  334359 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:18:37.674225  334359 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:18:37.678270  334359 start.go:564] Will wait 60s for crictl version
	I1108 09:18:37.678349  334359 ssh_runner.go:195] Run: which crictl
	I1108 09:18:37.681747  334359 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:18:37.706627  334359 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:18:37.706721  334359 ssh_runner.go:195] Run: crio --version
	I1108 09:18:37.734071  334359 ssh_runner.go:195] Run: crio --version
	I1108 09:18:37.764547  334359 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:18:37.766137  334359 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1108 09:18:37.784399  334359 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:18:37.788528  334359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:37.800335  334359 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:18:37.801624  334359 kubeadm.go:884] updating cluster {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:18:37.801765  334359 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:37.801841  334359 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:37.832474  334359 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:37.832495  334359 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:18:37.832541  334359 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:37.857934  334359 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:37.857955  334359 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:18:37.857962  334359 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:18:37.858055  334359 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-620528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:18:37.858134  334359 ssh_runner.go:195] Run: crio config
	I1108 09:18:37.903187  334359 cni.go:84] Creating CNI manager for ""
	I1108 09:18:37.903211  334359 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:37.903228  334359 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:18:37.903247  334359 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-620528 NodeName:newest-cni-620528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:18:37.903372  334359 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-620528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
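
	The dump above is three kubeadm documents plus a KubeletConfiguration, rendered by minikube from the options struct logged at kubeadm.go:190 and then scp'd to /var/tmp/minikube/kubeadm.yaml.new (2214 bytes, matching the scp line below). A toy sketch of that render step using Go's text/template; the fragment and field names here are illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// A stripped-down InitConfiguration fragment with the per-cluster
// values left as template fields.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
`

func main() {
	tmpl := template.Must(template.New("init").Parse(initCfg))
	// Values taken from the run above.
	_ = tmpl.Execute(os.Stdout, map[string]any{
		"AdvertiseAddress": "192.168.103.2",
		"BindPort":         8443,
		"CRISocket":        "unix:///var/run/crio/crio.sock",
		"NodeName":         "newest-cni-620528",
	})
}
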
	
	I1108 09:18:37.903428  334359 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:18:37.911588  334359 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:18:37.911640  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:18:37.919259  334359 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:18:37.931487  334359 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:18:37.943791  334359 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 09:18:37.955842  334359 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:18:37.959448  334359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:37.969421  334359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:38.048977  334359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:38.072616  334359 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528 for IP: 192.168.103.2
	I1108 09:18:38.072650  334359 certs.go:195] generating shared ca certs ...
	I1108 09:18:38.072673  334359 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.072837  334359 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:18:38.072876  334359 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:18:38.072885  334359 certs.go:257] generating profile certs ...
	I1108 09:18:38.072978  334359 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key
	I1108 09:18:38.073036  334359 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34
	I1108 09:18:38.073085  334359 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key
	I1108 09:18:38.073189  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:18:38.073218  334359 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:18:38.073227  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:18:38.073248  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:18:38.073270  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:18:38.073326  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:18:38.073374  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:38.073971  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:18:38.092677  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:18:38.110876  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:18:38.129782  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:18:38.151737  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:18:38.169621  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:18:38.186099  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:18:38.202890  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:18:38.219921  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:18:38.236803  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:18:38.253736  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:18:38.271696  334359 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:18:38.283947  334359 ssh_runner.go:195] Run: openssl version
	I1108 09:18:38.290131  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:18:38.298700  334359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:18:38.302484  334359 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:18:38.302538  334359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:18:38.336062  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:18:38.344338  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:18:38.352566  334359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:38.356110  334359 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:38.356166  334359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:38.389582  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:18:38.397744  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:18:38.406339  334359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:18:38.409982  334359 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:18:38.410038  334359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:18:38.445707  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
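
	Each certificate above gets the same treatment: openssl x509 -hash prints the subject hash (b5213941 for minikubeCA.pem, hence the /etc/ssl/certs/b5213941.0 link), and the <hash>.0 symlink makes the cert discoverable by OpenSSL's default directory lookup. A sketch of that step, shelling out to openssl the way the log does; trustCert is an illustrative name.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// trustCert computes the OpenSSL subject hash for a PEM cert and links
// it under /etc/ssl/certs/<hash>.0 (the ln -fs step above).
func trustCert(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
	_ = os.Remove(link) // -f semantics: replace a stale link if present
	return os.Symlink(pemPath, link)
}

func main() {
	if err := trustCert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
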
	I1108 09:18:38.454145  334359 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:18:38.458313  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:18:38.492065  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:18:38.526048  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:18:38.561206  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:18:38.603651  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:18:38.651170  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
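
	The six -checkend 86400 probes above each exit non-zero if the certificate expires within 24 hours (86400 seconds), which is how the run decides whether any control-plane certs need regenerating. The equivalent check in Go's crypto/x509, as a sketch (the run itself shells out to openssl):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file
// expires within d, mirroring openssl x509 -checkend <seconds>.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}
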
	I1108 09:18:38.695091  334359 kubeadm.go:401] StartCluster: {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:38.695189  334359 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:38.695259  334359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:38.734650  334359 cri.go:89] found id: "79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3"
	I1108 09:18:38.734672  334359 cri.go:89] found id: "0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea"
	I1108 09:18:38.734676  334359 cri.go:89] found id: "760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969"
	I1108 09:18:38.734679  334359 cri.go:89] found id: "b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428"
	I1108 09:18:38.734682  334359 cri.go:89] found id: ""
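
	The four IDs above come from crictl ps -a --quiet filtered by the pod-namespace label; --quiet prints one container ID per line and nothing else, and splitting the output on newlines leaves a trailing empty entry, which is why the last found id is blank. A local sketch of the same scan (sudo and crictl on PATH are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// kubeSystemContainers lists all (running or exited) kube-system
// container IDs via crictl, the same query the log issues over SSH.
func kubeSystemContainers() ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet",
		"--label", "io.kubernetes.pod.namespace=kube-system").Output()
	if err != nil {
		return nil, err
	}
	var ids []string
	for _, id := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		if id != "" {
			ids = append(ids, id)
		}
	}
	return ids, nil
}

func main() {
	ids, err := kubeSystemContainers()
	if err != nil {
		fmt.Println("crictl failed:", err)
		return
	}
	fmt.Printf("found %d kube-system containers\n", len(ids))
}
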
	I1108 09:18:38.734721  334359 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:18:38.747269  334359 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:38Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:38.747371  334359 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:18:38.755122  334359 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:18:38.755140  334359 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:18:38.755186  334359 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:18:38.762890  334359 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:18:38.763314  334359 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-620528" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:38.763450  334359 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-5860/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-620528" cluster setting kubeconfig missing "newest-cni-620528" context setting]
	I1108 09:18:38.763793  334359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.764931  334359 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:18:38.773493  334359 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1108 09:18:38.773517  334359 kubeadm.go:602] duration metric: took 18.371472ms to restartPrimaryControlPlane
	I1108 09:18:38.773525  334359 kubeadm.go:403] duration metric: took 78.447318ms to StartCluster
	I1108 09:18:38.773540  334359 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.773604  334359 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:38.774148  334359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.774411  334359 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:18:38.774487  334359 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:18:38.774571  334359 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-620528"
	I1108 09:18:38.774590  334359 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-620528"
	W1108 09:18:38.774598  334359 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:18:38.774627  334359 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:38.774612  334359 addons.go:70] Setting dashboard=true in profile "newest-cni-620528"
	I1108 09:18:38.774635  334359 addons.go:70] Setting default-storageclass=true in profile "newest-cni-620528"
	I1108 09:18:38.774662  334359 addons.go:239] Setting addon dashboard=true in "newest-cni-620528"
	I1108 09:18:38.774667  334359 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-620528"
	W1108 09:18:38.774671  334359 addons.go:248] addon dashboard should already be in state true
	I1108 09:18:38.774698  334359 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:38.774708  334359 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:38.774988  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.775128  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.775155  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.777872  334359 out.go:179] * Verifying Kubernetes components...
	I1108 09:18:38.779377  334359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:38.800691  334359 addons.go:239] Setting addon default-storageclass=true in "newest-cni-620528"
	W1108 09:18:38.800716  334359 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:18:38.800748  334359 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:38.801241  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.805451  334359 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:18:38.806417  334359 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:18:38.807401  334359 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:38.807449  334359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:18:38.807507  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:38.809476  334359 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 09:18:38.810654  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:18:38.810687  334359 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:18:38.810758  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:38.840631  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:38.841954  334359 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:38.842012  334359 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:18:38.842073  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:38.846258  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:38.866362  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
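
	The inspect calls above recover the host port that Docker mapped to the container's 22/tcp (33134 here), which becomes the 127.0.0.1 SSH endpoint used by the sshutil clients. A sketch of the same lookup via the docker CLI, reusing the template expression from the cli_runner lines (minus the surrounding quotes):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshHostPort asks Docker which host port is bound to the container's
// SSH port, using the Ports template from the log above.
func sshHostPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshHostPort("newest-cni-620528")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("ssh endpoint: 127.0.0.1:" + port)
}
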
	I1108 09:18:38.916091  334359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:38.931693  334359 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:18:38.931764  334359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:18:38.945969  334359 api_server.go:72] duration metric: took 171.524429ms to wait for apiserver process to appear ...
	I1108 09:18:38.945993  334359 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:18:38.946012  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:38.952649  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:18:38.952674  334359 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:18:38.958048  334359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:38.966551  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:18:38.966577  334359 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:18:38.980914  334359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:38.983121  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:18:38.983146  334359 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:18:38.999186  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:18:38.999208  334359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:18:39.016587  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:18:39.016614  334359 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:18:39.033445  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:18:39.033469  334359 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:18:39.049746  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:18:39.049767  334359 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:18:39.062258  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:18:39.062297  334359 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:18:39.074544  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:18:39.074569  334359 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:18:39.087187  334359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:18:40.531705  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:18:40.531742  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:18:40.531759  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:40.549663  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1108 09:18:40.549717  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1108 09:18:40.946086  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:40.950536  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:18:40.950563  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:18:41.069056  334359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088094299s)
	I1108 09:18:41.069683  334359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98245219s)
	I1108 09:18:41.069770  334359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.111687076s)
	I1108 09:18:41.071523  334359 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-620528 addons enable metrics-server
	
	I1108 09:18:41.080770  334359 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:18:41.082116  334359 addons.go:515] duration metric: took 2.307638098s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:18:41.447110  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:41.451345  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:18:41.451368  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:18:41.946978  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:41.951154  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:18:41.952165  334359 api_server.go:141] control plane version: v1.34.1
	I1108 09:18:41.952195  334359 api_server.go:131] duration metric: took 3.006194674s to wait for apiserver health ...
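
	The probes above walk the apiserver's restart sequence: 403 while anonymous access is still blocked (the second 403 even names the missing system:public-info-viewer clusterrole), 500 while the rbac/bootstrap-roles and scheduling poststart hooks are pending, then 200 "ok" after roughly three seconds. A minimal polling loop in the same spirit, as a sketch; the InsecureSkipVerify transport mirrors the anonymous probe here and is not a recommendation for anything beyond a local health check.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitHealthy polls the URL until it returns 200, logging the interim
// 403/500 responses the way the api_server.go lines above do.
func waitHealthy(url string, timeout time.Duration) error {
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz %d: %.60s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitHealthy("https://192.168.103.2:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}
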
	I1108 09:18:41.952206  334359 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:18:41.955755  334359 system_pods.go:59] 8 kube-system pods found
	I1108 09:18:41.955789  334359 system_pods.go:61] "coredns-66bc5c9577-7fndk" [ee377f7d-6e12-40b3-9257-b0558cadc023] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:41.955799  334359 system_pods.go:61] "etcd-newest-cni-620528" [d267a844-8f28-4d49-a9a3-f19643f494fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:18:41.955810  334359 system_pods.go:61] "kindnet-fk7tk" [8240271d-256f-4fde-82b4-0c071eb000b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 09:18:41.955816  334359 system_pods.go:61] "kube-apiserver-newest-cni-620528" [a9d10205-e74b-49a0-ab30-fc4274b6c40a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:18:41.955825  334359 system_pods.go:61] "kube-controller-manager-newest-cni-620528" [5ca73710-f538-4265-a4f3-fe797f8e0cfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:18:41.955835  334359 system_pods.go:61] "kube-proxy-xrf7w" [ef13acfb-b7b4-4aba-8145-f2ce94813f8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:18:41.955843  334359 system_pods.go:61] "kube-scheduler-newest-cni-620528" [6dd7feec-3ba2-40c2-b761-3aa6855cf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:18:41.955849  334359 system_pods.go:61] "storage-provisioner" [4e2975a8-6a90-42a4-b1bb-b425b99ad8be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:41.955857  334359 system_pods.go:74] duration metric: took 3.644129ms to wait for pod list to return data ...
	I1108 09:18:41.955864  334359 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:18:41.958098  334359 default_sa.go:45] found service account: "default"
	I1108 09:18:41.958116  334359 default_sa.go:55] duration metric: took 2.246753ms for default service account to be created ...
	I1108 09:18:41.958126  334359 kubeadm.go:587] duration metric: took 3.183687884s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:41.958150  334359 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:18:41.960411  334359 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:18:41.960432  334359 node_conditions.go:123] node cpu capacity is 8
	I1108 09:18:41.960445  334359 node_conditions.go:105] duration metric: took 2.291276ms to run NodePressure ...
	I1108 09:18:41.960455  334359 start.go:242] waiting for startup goroutines ...
	I1108 09:18:41.960462  334359 start.go:247] waiting for cluster config update ...
	I1108 09:18:41.960472  334359 start.go:256] writing updated cluster config ...
	I1108 09:18:41.960711  334359 ssh_runner.go:195] Run: rm -f paused
	I1108 09:18:42.008417  334359 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:42.011648  334359 out.go:179] * Done! kubectl is now configured to use "newest-cni-620528" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.746641008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.750523983Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e20830eb-d7ba-4d39-9684-c8ede54613de name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.751207141Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a9763365-4175-46bc-bc62-c52376c00ac1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.751941834Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.752575673Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.75266612Z" level=info msg="Ran pod sandbox f8083512327610bf91d51ab90fc881a84029d6e1cec422858a7b15223ba12951 with infra container: kube-system/kube-proxy-xrf7w/POD" id=e20830eb-d7ba-4d39-9684-c8ede54613de name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.753109916Z" level=info msg="Ran pod sandbox 8b698891d6b0d4737a381438ce348b9f7879822dbc4536a9e87146eb5e2f8a8c with infra container: kube-system/kindnet-fk7tk/POD" id=a9763365-4175-46bc-bc62-c52376c00ac1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.753769062Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=eeaf9dea-ff0e-4016-a753-072cb563561a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.754005852Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2da0045e-a039-4f72-b2d1-b6f2e50c9e07 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.754637475Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8c5fb6a7-4135-4166-a8c8-d5b547141214 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.754895277Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=25bd4ecc-d331-40d6-b8cc-b7c29e8bdb63 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.755621841Z" level=info msg="Creating container: kube-system/kube-proxy-xrf7w/kube-proxy" id=21ca463e-3420-4bb9-a246-fba6c4eb0a8c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.755742066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.755803605Z" level=info msg="Creating container: kube-system/kindnet-fk7tk/kindnet-cni" id=dd3bed0f-268e-4faf-bed3-48dfa7664af7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.75587793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.760427942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.762095195Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.762350591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.762877306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.788017057Z" level=info msg="Created container bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52: kube-system/kindnet-fk7tk/kindnet-cni" id=dd3bed0f-268e-4faf-bed3-48dfa7664af7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.788645864Z" level=info msg="Starting container: bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52" id=5b92c65e-04f1-49dd-8a9b-09c7e5af08f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.790395469Z" level=info msg="Started container" PID=1044 containerID=bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52 description=kube-system/kindnet-fk7tk/kindnet-cni id=5b92c65e-04f1-49dd-8a9b-09c7e5af08f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b698891d6b0d4737a381438ce348b9f7879822dbc4536a9e87146eb5e2f8a8c
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.791205245Z" level=info msg="Created container cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91: kube-system/kube-proxy-xrf7w/kube-proxy" id=21ca463e-3420-4bb9-a246-fba6c4eb0a8c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.791782823Z" level=info msg="Starting container: cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91" id=719d587f-1d0f-44b3-b043-f67d66a15aca name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.794389034Z" level=info msg="Started container" PID=1043 containerID=cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91 description=kube-system/kube-proxy-xrf7w/kube-proxy id=719d587f-1d0f-44b3-b043-f67d66a15aca name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8083512327610bf91d51ab90fc881a84029d6e1cec422858a7b15223ba12951
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc09c8ed5007b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   3 seconds ago       Running             kindnet-cni               1                   8b698891d6b0d       kindnet-fk7tk                               kube-system
	cc9c2f6d17b30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   3 seconds ago       Running             kube-proxy                1                   f808351232761       kube-proxy-xrf7w                            kube-system
	79e07ac2fc3d3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   7 seconds ago       Running             kube-apiserver            1                   e46973f5bc07d       kube-apiserver-newest-cni-620528            kube-system
	0ae6c102b337e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   7 seconds ago       Running             etcd                      1                   da1bdaf3b5d20       etcd-newest-cni-620528                      kube-system
	760841a0a23f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   7 seconds ago       Running             kube-controller-manager   1                   e74e52bb72ce7       kube-controller-manager-newest-cni-620528   kube-system
	b07e735128b3d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   7 seconds ago       Running             kube-scheduler            1                   d60ffd76c3c4d       kube-scheduler-newest-cni-620528            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-620528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-620528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=newest-cni-620528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_18_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:18:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-620528
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:18:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-620528
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fd9cdc5f-2e20-41a6-aefd-53097190daa1
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-620528                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         25s
	  kube-system                 kindnet-fk7tk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      19s
	  kube-system                 kube-apiserver-newest-cni-620528             250m (3%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-controller-manager-newest-cni-620528    200m (2%)     0 (0%)      0 (0%)           0 (0%)         25s
	  kube-system                 kube-proxy-xrf7w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         19s
	  kube-system                 kube-scheduler-newest-cni-620528             100m (1%)     0 (0%)      0 (0%)           0 (0%)         25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 18s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 30s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  30s (x8 over 30s)  kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    30s (x8 over 30s)  kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     30s (x8 over 30s)  kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    25s                kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  25s                kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     25s                kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  Starting                 25s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           20s                node-controller  Node newest-cni-620528 event: Registered Node newest-cni-620528 in Controller
	  Normal  Starting                 7s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7s (x8 over 7s)    kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7s (x8 over 7s)    kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7s (x8 over 7s)    kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2s                 node-controller  Node newest-cni-620528 event: Registered Node newest-cni-620528 in Controller
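	The Ready=False condition above is the expected symptom of an uninitialized CNI. A minimal way to poll that condition directly, as a sketch that assumes the same kubeconfig context the harness uses:
	  kubectl --context newest-cni-620528 get node newest-cni-620528 \
	    -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'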
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
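	The "martian source" entries above mean the kernel saw pod-network (10.244.0.x) traffic arrive on eth0 with an implausible source address; they are only emitted when martian logging is enabled. A sketch for confirming the relevant sysctls from inside the node:
	  sysctl net.ipv4.conf.all.log_martians net.ipv4.conf.all.rp_filter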
	
	
	==> etcd [0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea] <==
	{"level":"warn","ts":"2025-11-08T09:18:39.914020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.920718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.929275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.935417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.941264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.947139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.956435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.969880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.975754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.986431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.992415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.998503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.004477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.010221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.016272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.022965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.028883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.034975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.041023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.047156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.053235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.073708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.080689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.086515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.135310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57010","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:45 up  1:01,  0 user,  load average: 4.45, 4.03, 2.69
	Linux newest-cni-620528 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52] <==
	I1108 09:18:41.935432       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:18:41.935649       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1108 09:18:41.935784       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:18:41.935802       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:18:41.935828       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:18:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:18:42.232265       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:18:42.232388       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:18:42.232400       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:18:42.232562       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:18:42.632776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:18:42.632802       1 metrics.go:72] Registering metrics
	I1108 09:18:42.632866       1 controller.go:711] "Syncing nftables rules"
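	The "nri plugin exited" line only means the optional NRI socket is absent on this runtime; kindnet continues without it, as the synced caches above show. A one-line existence check from inside the node, as a sketch:
	  test -S /var/run/nri/nri.sock && echo nri socket present || echo nri socket absent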
	
	
	==> kube-apiserver [79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3] <==
	I1108 09:18:40.598808       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:18:40.598807       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:18:40.598813       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:18:40.598951       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:18:40.598869       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:18:40.598897       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:18:40.599106       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:18:40.599189       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:18:40.604455       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1108 09:18:40.605230       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:18:40.628186       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:18:40.635877       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:18:40.635918       1 policy_source.go:240] refreshing policies
	I1108 09:18:40.649323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:18:40.869123       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:18:40.895326       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:18:40.912696       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:18:40.921002       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:18:40.927854       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:18:40.963353       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.111.178"}
	I1108 09:18:40.973367       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.138.181"}
	I1108 09:18:41.502325       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:18:43.928443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:18:44.378123       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:18:44.428238       1 controller.go:667] quota admission added evaluator for: replicasets.apps
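	The two "allocated clusterIPs" entries correspond to the dashboard Services re-created during the restart. Listing them, as a sketch with the same context assumption as above:
	  kubectl --context newest-cni-620528 -n kubernetes-dashboard get svc -o wide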
	
	
	==> kube-controller-manager [760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969] <==
	I1108 09:18:43.924981       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:18:43.925038       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:18:43.925055       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:18:43.925083       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:18:43.925119       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:18:43.925164       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:18:43.925262       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:18:43.925468       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:18:43.926537       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:18:43.926623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:18:43.926658       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:18:43.926756       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:18:43.926768       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-620528"
	I1108 09:18:43.926822       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:18:43.929974       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:18:43.931188       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:18:43.931362       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:18:43.933665       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:18:43.936473       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:18:43.937647       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:18:43.941866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:18:43.943044       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:18:43.945322       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:18:43.946554       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:18:43.953941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91] <==
	I1108 09:18:41.828154       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:18:41.883168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:18:41.983505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:18:41.983547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1108 09:18:41.983650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:18:42.002615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:18:42.002684       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:18:42.008027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:18:42.008613       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:18:42.008654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:18:42.011483       1 config.go:200] "Starting service config controller"
	I1108 09:18:42.011505       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:18:42.011507       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:18:42.011514       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:18:42.011494       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:18:42.011535       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:18:42.011641       1 config.go:309] "Starting node config controller"
	I1108 09:18:42.011660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:18:42.011667       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:18:42.112146       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:18:42.112209       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:18:42.112242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
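	The nodePortAddresses warning above is advisory: with the field unset, NodePort traffic is accepted on every local IP. A sketch for reading the current value, assuming the standard kubeadm-managed kube-proxy ConfigMap:
	  kubectl --context newest-cni-620528 -n kube-system get cm kube-proxy -o yaml | grep -i nodePortAddresses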
	
	
	==> kube-scheduler [b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428] <==
	I1108 09:18:39.260154       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:18:40.529823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:18:40.529861       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:18:40.529874       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:18:40.529883       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:18:40.553875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:18:40.553911       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:18:40.557632       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:18:40.557659       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:18:40.558085       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:18:40.558498       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:18:40.658841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.173925     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-620528\" not found" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.174079     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-620528\" not found" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.174234     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-620528\" not found" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.540716     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.654644     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-620528\" already exists" pod="kube-system/kube-scheduler-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.654692     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.660814     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-620528\" already exists" pod="kube-system/etcd-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.660851     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.666900     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-620528\" already exists" pod="kube-system/kube-apiserver-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.666933     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.674704     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-620528\" already exists" pod="kube-system/kube-controller-manager-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.737221     669 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.737353     669 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.737393     669 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.738235     669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.137357     669 apiserver.go:52] "Watching apiserver"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.240353     669 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340222     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-xtables-lock\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340269     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-cni-cfg\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340335     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-lib-modules\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340369     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-xtables-lock\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340391     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-lib-modules\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:42 newest-cni-620528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:18:42 newest-cni-620528 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:18:42 newest-cni-620528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
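	The last three lines show systemd stopping the kubelet, consistent with the pause operation under test rather than a crash. A sketch for pulling the same journal slice by hand:
	  out/minikube-linux-amd64 -p newest-cni-620528 ssh -- sudo journalctl -u kubelet -n 25 --no-pager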
	

                                                
                                                
-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-620528 -n newest-cni-620528
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-620528 -n newest-cni-620528: exit status 2 (315.704116ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-620528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq: exit status 1 (58.50174ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7fndk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-l5b8k" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-n9rrq" not found

                                                
                                                
** /stderr **
helpers_test.go:287: kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:223: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: network settings <======
helpers_test.go:230: HOST ENV snapshots: PROXY env: HTTP_PROXY="<empty>" HTTPS_PROXY="<empty>" NO_PROXY="<empty>"
helpers_test.go:238: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: docker inspect <======
helpers_test.go:239: (dbg) Run:  docker inspect newest-cni-620528
helpers_test.go:243: (dbg) docker inspect newest-cni-620528:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9",
	        "Created": "2025-11-08T09:18:04.364605976Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 334564,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2025-11-08T09:18:32.184543618Z",
	            "FinishedAt": "2025-11-08T09:18:31.353088595Z"
	        },
	        "Image": "sha256:52e9213f5e236fd5a6d1e2efda5bc29db9474154d6b4d361eae03a0a8882d9e2",
	        "ResolvConfPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/hostname",
	        "HostsPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/hosts",
	        "LogPath": "/var/lib/docker/containers/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9/e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9-json.log",
	        "Name": "/newest-cni-620528",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "newest-cni-620528:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "newest-cni-620528",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "private",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 3221225472,
	            "NanoCpus": 0,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 6442450944,
	            "MemorySwappiness": null,
	            "OomKillDisable": null,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "ID": "e2bd4d8f6d3f72b123c1b06a57f0011330ff756d0971e978bfc1bbd1cf0825d9",
	                "LowerDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9-init/diff:/var/lib/docker/overlay2/aaaebdfd5257b46c230238103264963ca2c9711143bdb31d545f41920a638488/diff",
	                "MergedDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/327f93152e1cebff7753b7e141966551ac24c7173e79221d8dcad682d2ba1ca9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "newest-cni-620528",
	                "Source": "/var/lib/docker/volumes/newest-cni-620528/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "newest-cni-620528",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "newest-cni-620528",
	                "name.minikube.sigs.k8s.io": "newest-cni-620528",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "152941532aea24d70365f6c670e3d1c6393c84b8eb777a1468fdf6172d3a5f17",
	            "SandboxKey": "/var/run/docker/netns/152941532aea",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33134"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33135"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33138"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33136"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33137"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "newest-cni-620528": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.103.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "56:55:bb:85:24:ae",
	                    "DriverOpts": null,
	                    "GwPriority": 0,
	                    "NetworkID": "92c9b3581086a3ec71939baea725cf0a225bd4e6d308483c2f50dd74f662a243",
	                    "EndpointID": "41066b804fabe43a192113f88da1e693b1eb84f71dd2001248d5a753cbac8fb8",
	                    "Gateway": "192.168.103.1",
	                    "IPAddress": "192.168.103.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "newest-cni-620528",
	                        "e2bd4d8f6d3f"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
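A single field can be pulled from the inspect output above without rereading the full JSON; for example, the host port mapped to the API server's 8443/tcp, sketched with docker's Go-template syntax:
	docker inspect -f '{{(index (index .NetworkSettings.Ports "8443/tcp") 0).HostPort}}' newest-cni-620528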
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528
helpers_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528: exit status 2 (308.929751ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:247: status error: exit status 2 (may be ok)
helpers_test.go:252: <<< TestStartStop/group/newest-cni/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:253: ======>  post-mortem[TestStartStop/group/newest-cni/serial/Pause]: minikube logs <======
helpers_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-620528 logs -n 25
helpers_test.go:260: TestStartStop/group/newest-cni/serial/Pause logs: 
-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                                                     ARGS                                                                                                                      │           PROFILE            │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ addons  │ enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                       │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1                                                                      │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ old-k8s-version-339286 image list --format=json                                                                                                                                                                                               │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p old-k8s-version-339286 --alsologtostderr -v=1                                                                                                                                                                                              │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ image   │ no-preload-220714 image list --format=json                                                                                                                                                                                                    │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p no-preload-220714 --alsologtostderr -v=1                                                                                                                                                                                                   │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ image   │ embed-certs-271910 image list --format=json                                                                                                                                                                                                   │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ pause   │ -p embed-certs-271910 --alsologtostderr -v=1                                                                                                                                                                                                  │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │                     │
	│ delete  │ -p old-k8s-version-339286                                                                                                                                                                                                                     │ old-k8s-version-339286       │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:17 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:17 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p no-preload-220714                                                                                                                                                                                                                          │ no-preload-220714            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p embed-certs-271910                                                                                                                                                                                                                         │ embed-certs-271910           │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ default-k8s-diff-port-677902 image list --format=json                                                                                                                                                                                         │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p default-k8s-diff-port-677902 --alsologtostderr -v=1                                                                                                                                                                                        │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ addons  │ enable metrics-server -p newest-cni-620528 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain                                                                                                       │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	│ stop    │ -p newest-cni-620528 --alsologtostderr -v=3                                                                                                                                                                                                   │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p default-k8s-diff-port-677902                                                                                                                                                                                                               │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ addons  │ enable dashboard -p newest-cni-620528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4                                                                                                                                                  │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ start   │ -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1 │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ delete  │ -p default-k8s-diff-port-677902                                                                                                                                                                                                               │ default-k8s-diff-port-677902 │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ image   │ newest-cni-620528 image list --format=json                                                                                                                                                                                                    │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │ 08 Nov 25 09:18 UTC │
	│ pause   │ -p newest-cni-620528 --alsologtostderr -v=1                                                                                                                                                                                                   │ newest-cni-620528            │ jenkins │ v1.37.0 │ 08 Nov 25 09:18 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
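	The Audit table above is rendered from minikube's persistent audit log; the same history can be printed outside the harness, as a sketch that assumes the logs command still accepts the --audit flag:
	  out/minikube-linux-amd64 -p newest-cni-620528 logs --audit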
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 09:18:31
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 09:18:31.953826  334359 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:18:31.954048  334359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:31.954056  334359 out.go:374] Setting ErrFile to fd 2...
	I1108 09:18:31.954060  334359 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:18:31.954271  334359 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:18:31.954704  334359 out.go:368] Setting JSON to false
	I1108 09:18:31.955653  334359 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3663,"bootTime":1762589849,"procs":235,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:18:31.955737  334359 start.go:143] virtualization: kvm guest
	I1108 09:18:31.957774  334359 out.go:179] * [newest-cni-620528] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:18:31.959088  334359 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:18:31.959111  334359 notify.go:221] Checking for updates...
	I1108 09:18:31.961930  334359 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:18:31.963381  334359 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:31.964619  334359 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:18:31.965870  334359 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:18:31.967135  334359 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:18:31.968759  334359 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:31.969172  334359 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:18:31.993139  334359 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:18:31.993260  334359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:32.049546  334359 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-08 09:18:32.039509341 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:32.049695  334359 docker.go:319] overlay module found
	I1108 09:18:32.052141  334359 out.go:179] * Using the docker driver based on existing profile
	I1108 09:18:32.053364  334359 start.go:309] selected driver: docker
	I1108 09:18:32.053378  334359 start.go:930] validating driver "docker" against &{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:32.053456  334359 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:18:32.054046  334359 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:18:32.111861  334359 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:2 ContainersRunning:1 ContainersPaused:0 ContainersStopped:1 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:40 OomKillDisable:false NGoroutines:55 SystemTime:2025-11-08 09:18:32.102294877 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:18:32.112146  334359 start_flags.go:1011] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:32.112172  334359 cni.go:84] Creating CNI manager for ""
	I1108 09:18:32.112216  334359 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:32.112247  334359 start.go:353] cluster config:
	{Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:32.114073  334359 out.go:179] * Starting "newest-cni-620528" primary control-plane node in "newest-cni-620528" cluster
	I1108 09:18:32.115399  334359 cache.go:124] Beginning downloading kic base image for docker with crio
	I1108 09:18:32.116707  334359 out.go:179] * Pulling base image v0.0.48-1761985721-21837 ...
	I1108 09:18:32.117968  334359 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:32.117998  334359 image.go:81] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon
	I1108 09:18:32.118013  334359 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
	I1108 09:18:32.118038  334359 cache.go:59] Caching tarball of preloaded images
	I1108 09:18:32.118164  334359 preload.go:233] Found /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I1108 09:18:32.118178  334359 cache.go:62] Finished verifying existence of preloaded tar for v1.34.1 on crio
	I1108 09:18:32.118356  334359 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:32.138662  334359 image.go:100] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 in local docker daemon, skipping pull
	I1108 09:18:32.138688  334359 cache.go:148] gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 exists in daemon, skipping load
	I1108 09:18:32.138703  334359 cache.go:233] Successfully downloaded all kic artifacts
	I1108 09:18:32.138730  334359 start.go:360] acquireMachinesLock for newest-cni-620528: {Name:mk40f88afe49598e6bed4e0d325b5b35b68ac310 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1108 09:18:32.138796  334359 start.go:364] duration metric: took 44.667µs to acquireMachinesLock for "newest-cni-620528"
	I1108 09:18:32.138817  334359 start.go:96] Skipping create...Using existing machine configuration
	I1108 09:18:32.138823  334359 fix.go:54] fixHost starting: 
	I1108 09:18:32.139093  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:32.156629  334359 fix.go:112] recreateIfNeeded on newest-cni-620528: state=Stopped err=<nil>
	W1108 09:18:32.156657  334359 fix.go:138] unexpected machine state, will restart: <nil>
	I1108 09:18:32.158610  334359 out.go:252] * Restarting existing docker container for "newest-cni-620528" ...
	I1108 09:18:32.158677  334359 cli_runner.go:164] Run: docker start newest-cni-620528
	I1108 09:18:32.438537  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:32.461107  334359 kic.go:430] container "newest-cni-620528" state is running.
	I1108 09:18:32.461556  334359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:32.482947  334359 profile.go:143] Saving config to /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/config.json ...
	I1108 09:18:32.483170  334359 machine.go:94] provisionDockerMachine start ...
	I1108 09:18:32.483235  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:32.503044  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:32.503357  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:32.503373  334359 main.go:143] libmachine: About to run SSH command:
	hostname
	I1108 09:18:32.503937  334359 main.go:143] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:52836->127.0.0.1:33134: read: connection reset by peer
	I1108 09:18:35.632305  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:35.632364  334359 ubuntu.go:182] provisioning hostname "newest-cni-620528"
	I1108 09:18:35.632433  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:35.652178  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:35.652420  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:35.652443  334359 main.go:143] libmachine: About to run SSH command:
	sudo hostname newest-cni-620528 && echo "newest-cni-620528" | sudo tee /etc/hostname
	I1108 09:18:35.789044  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: newest-cni-620528
	
	I1108 09:18:35.789134  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:35.807870  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:35.808132  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:35.808151  334359 main.go:143] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-620528' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-620528/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-620528' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1108 09:18:35.934984  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: 
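	The shell fragment above is the usual idempotent hosts-file fix-up: it only touches /etc/hosts when no line already ends in the new hostname, rewriting the conventional 127.0.1.1 alias in place if one exists and appending it otherwise. Assuming the rewrite branch fires, the resulting entry would read (a reconstruction from the sed/tee commands above; the log does not echo the file back):
	
		127.0.1.1 newest-cni-620528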
	I1108 09:18:35.935010  334359 ubuntu.go:188] set auth options {CertDir:/home/jenkins/minikube-integration/21866-5860/.minikube CaCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/21866-5860/.minikube}
	I1108 09:18:35.935037  334359 ubuntu.go:190] setting up certificates
	I1108 09:18:35.935074  334359 provision.go:84] configureAuth start
	I1108 09:18:35.935126  334359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:35.953694  334359 provision.go:143] copyHostCerts
	I1108 09:18:35.953748  334359 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem, removing ...
	I1108 09:18:35.953766  334359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem
	I1108 09:18:35.953829  334359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/ca.pem (1082 bytes)
	I1108 09:18:35.953961  334359 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem, removing ...
	I1108 09:18:35.953974  334359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem
	I1108 09:18:35.954006  334359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/cert.pem (1123 bytes)
	I1108 09:18:35.954064  334359 exec_runner.go:144] found /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem, removing ...
	I1108 09:18:35.954072  334359 exec_runner.go:203] rm: /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem
	I1108 09:18:35.954094  334359 exec_runner.go:151] cp: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/21866-5860/.minikube/key.pem (1675 bytes)
	I1108 09:18:35.954151  334359 provision.go:117] generating server cert: /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem org=jenkins.newest-cni-620528 san=[127.0.0.1 192.168.103.2 localhost minikube newest-cni-620528]
	I1108 09:18:36.080750  334359 provision.go:177] copyRemoteCerts
	I1108 09:18:36.080811  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1108 09:18:36.080844  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.099244  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.192779  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I1108 09:18:36.209789  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1108 09:18:36.226539  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1108 09:18:36.243134  334359 provision.go:87] duration metric: took 308.049591ms to configureAuth
	I1108 09:18:36.243164  334359 ubuntu.go:206] setting minikube options for container-runtime
	I1108 09:18:36.243376  334359 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:36.243513  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.262092  334359 main.go:143] libmachine: Using SSH client type: native
	I1108 09:18:36.262377  334359 main.go:143] libmachine: &{{{<nil> 0 [] [] []} docker [0x8416e0] 0x8443c0 <nil>  [] 0s} 127.0.0.1 33134 <nil> <nil>}
	I1108 09:18:36.262400  334359 main.go:143] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1108 09:18:36.510057  334359 main.go:143] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1108 09:18:36.510082  334359 machine.go:97] duration metric: took 4.026899157s to provisionDockerMachine
	I1108 09:18:36.510095  334359 start.go:293] postStartSetup for "newest-cni-620528" (driver="docker")
	I1108 09:18:36.510108  334359 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1108 09:18:36.510175  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1108 09:18:36.510217  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.528769  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.621635  334359 ssh_runner.go:195] Run: cat /etc/os-release
	I1108 09:18:36.625056  334359 main.go:143] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1108 09:18:36.625080  334359 info.go:137] Remote host: Debian GNU/Linux 12 (bookworm)
	I1108 09:18:36.625090  334359 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/addons for local assets ...
	I1108 09:18:36.625172  334359 filesync.go:126] Scanning /home/jenkins/minikube-integration/21866-5860/.minikube/files for local assets ...
	I1108 09:18:36.625243  334359 filesync.go:149] local asset: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem -> 93692.pem in /etc/ssl/certs
	I1108 09:18:36.625377  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I1108 09:18:36.632681  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:36.649514  334359 start.go:296] duration metric: took 139.40288ms for postStartSetup
	I1108 09:18:36.649610  334359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:18:36.649648  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.667733  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.758494  334359 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
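	The two df probes above are capacity checks on the /var mount: the first awk extracts the use percentage from the second line of df output, the second extracts free space in whole gigabytes. A sketch of what they print (values illustrative, not captured in this run):
	
		$ df -h /var | awk 'NR==2{print $5}'
		12%
		$ df -BG /var | awk 'NR==2{print $4}'
		240G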
	I1108 09:18:36.763276  334359 fix.go:56] duration metric: took 4.624446908s for fixHost
	I1108 09:18:36.763319  334359 start.go:83] releasing machines lock for "newest-cni-620528", held for 4.624510125s
	I1108 09:18:36.763383  334359 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" newest-cni-620528
	I1108 09:18:36.781602  334359 ssh_runner.go:195] Run: cat /version.json
	I1108 09:18:36.781652  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.781698  334359 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1108 09:18:36.781748  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:36.801220  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.801805  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:36.891347  334359 ssh_runner.go:195] Run: systemctl --version
	I1108 09:18:36.943300  334359 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1108 09:18:36.977988  334359 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W1108 09:18:36.982628  334359 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I1108 09:18:36.982679  334359 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1108 09:18:36.990136  334359 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
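	The find/-exec mv invocation above disables competing CNI configs non-destructively: anything matching *bridge* or *podman* directly under /etc/cni/net.d is renamed with a .mk_disabled suffix so the runtime stops loading it while leaving it recoverable. A manual equivalent, with a hypothetical file name for illustration:
	
		sudo mv /etc/cni/net.d/87-podman-bridge.conflist /etc/cni/net.d/87-podman-bridge.conflist.mk_disabled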
	I1108 09:18:36.990158  334359 start.go:496] detecting cgroup driver to use...
	I1108 09:18:36.990189  334359 detect.go:190] detected "systemd" cgroup driver on host os
	I1108 09:18:36.990229  334359 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1108 09:18:37.004070  334359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1108 09:18:37.016204  334359 docker.go:218] disabling cri-docker service (if available) ...
	I1108 09:18:37.016252  334359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1108 09:18:37.031042  334359 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1108 09:18:37.042796  334359 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1108 09:18:37.116169  334359 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1108 09:18:37.197068  334359 docker.go:234] disabling docker service ...
	I1108 09:18:37.197150  334359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1108 09:18:37.211457  334359 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1108 09:18:37.223640  334359 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1108 09:18:37.298267  334359 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1108 09:18:37.377160  334359 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1108 09:18:37.389141  334359 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1108 09:18:37.403403  334359 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10.1" pause image...
	I1108 09:18:37.403457  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.412409  334359 crio.go:70] configuring cri-o to use "systemd" as cgroup driver...
	I1108 09:18:37.412477  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "systemd"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.421158  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.429474  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.437775  334359 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1108 09:18:37.445932  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.454974  334359 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.463427  334359 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1108 09:18:37.472078  334359 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1108 09:18:37.479077  334359 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1108 09:18:37.486652  334359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:37.565514  334359 ssh_runner.go:195] Run: sudo systemctl restart crio
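	Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf carrying roughly these settings before crio is restarted (a reconstruction from the commands, not a capture; surrounding keys come from the base image's shipped config):
	
		pause_image = "registry.k8s.io/pause:3.10.1"
		cgroup_manager = "systemd"
		conmon_cgroup = "pod"
		default_sysctls = [
		  "net.ipv4.ip_unprivileged_port_start=0",
		]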
	I1108 09:18:37.674157  334359 start.go:543] Will wait 60s for socket path /var/run/crio/crio.sock
	I1108 09:18:37.674225  334359 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1108 09:18:37.678270  334359 start.go:564] Will wait 60s for crictl version
	I1108 09:18:37.678349  334359 ssh_runner.go:195] Run: which crictl
	I1108 09:18:37.681747  334359 ssh_runner.go:195] Run: sudo /usr/local/bin/crictl version
	I1108 09:18:37.706627  334359 start.go:580] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.34.1
	RuntimeApiVersion:  v1
	I1108 09:18:37.706721  334359 ssh_runner.go:195] Run: crio --version
	I1108 09:18:37.734071  334359 ssh_runner.go:195] Run: crio --version
	I1108 09:18:37.764547  334359 out.go:179] * Preparing Kubernetes v1.34.1 on CRI-O 1.34.1 ...
	I1108 09:18:37.766137  334359 cli_runner.go:164] Run: docker network inspect newest-cni-620528 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
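	The Go template passed to docker network inspect above flattens the network's IPAM config and container attachments into a single JSON object. For this profile the output would look roughly like the following sketch (only the name is certain; subnet and gateway are inferred from the 192.168.103.x addresses elsewhere in this log, and MTU renders as 0 when the network sets no mtu option):
	
		{"Name": "newest-cni-620528", "Driver": "bridge", "Subnet": "192.168.103.0/24", "Gateway": "192.168.103.1", "MTU": 0, "ContainerIPs": ["192.168.103.2/24",]}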
	I1108 09:18:37.784399  334359 ssh_runner.go:195] Run: grep 192.168.103.1	host.minikube.internal$ /etc/hosts
	I1108 09:18:37.788528  334359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.103.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
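	The bash one-liner above is a dedupe-then-append edit: grep -v drops any stale host.minikube.internal mapping, echo appends the fresh one, and a single sudo cp moves the temp file back over /etc/hosts. The entry it installs:
	
		192.168.103.1	host.minikube.internal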
	I1108 09:18:37.800335  334359 out.go:179]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I1108 09:18:37.801624  334359 kubeadm.go:884] updating cluster {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1108 09:18:37.801765  334359 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
	I1108 09:18:37.801841  334359 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:37.832474  334359 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:37.832495  334359 crio.go:433] Images already preloaded, skipping extraction
	I1108 09:18:37.832541  334359 ssh_runner.go:195] Run: sudo crictl images --output json
	I1108 09:18:37.857934  334359 crio.go:514] all images are preloaded for cri-o runtime.
	I1108 09:18:37.857955  334359 cache_images.go:86] Images are preloaded, skipping loading
	I1108 09:18:37.857962  334359 kubeadm.go:935] updating node { 192.168.103.2 8443 v1.34.1 crio true true} ...
	I1108 09:18:37.858055  334359 kubeadm.go:947] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.34.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=newest-cni-620528 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.103.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1108 09:18:37.858134  334359 ssh_runner.go:195] Run: crio config
	I1108 09:18:37.903187  334359 cni.go:84] Creating CNI manager for ""
	I1108 09:18:37.903211  334359 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1108 09:18:37.903228  334359 kubeadm.go:85] Using pod CIDR: 10.42.0.0/16
	I1108 09:18:37.903247  334359 kubeadm.go:190] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.103.2 APIServerPort:8443 KubernetesVersion:v1.34.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-620528 NodeName:newest-cni-620528 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.103.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.103.2 CgroupDriver:systemd ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1108 09:18:37.903372  334359 kubeadm.go:196] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.103.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-620528"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.103.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.103.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	kubernetesVersion: v1.34.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: systemd
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
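	The rendered config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. Outside the harness, a file like this can be sanity-checked directly (a sketch; the `kubeadm config validate` subcommand is assumed to be available in the bundled v1.34.1 binary):
	
		sudo /var/lib/minikube/binaries/v1.34.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new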
	I1108 09:18:37.903428  334359 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.34.1
	I1108 09:18:37.911588  334359 binaries.go:44] Found k8s binaries, skipping transfer
	I1108 09:18:37.911640  334359 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1108 09:18:37.919259  334359 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (368 bytes)
	I1108 09:18:37.931487  334359 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1108 09:18:37.943791  334359 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2214 bytes)
	I1108 09:18:37.955842  334359 ssh_runner.go:195] Run: grep 192.168.103.2	control-plane.minikube.internal$ /etc/hosts
	I1108 09:18:37.959448  334359 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.103.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1108 09:18:37.969421  334359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:38.048977  334359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:38.072616  334359 certs.go:69] Setting up /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528 for IP: 192.168.103.2
	I1108 09:18:38.072650  334359 certs.go:195] generating shared ca certs ...
	I1108 09:18:38.072673  334359 certs.go:227] acquiring lock for ca certs: {Name:mkdee6172808e8f269398fa0affda7e8ed82e2e3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.072837  334359 certs.go:236] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key
	I1108 09:18:38.072876  334359 certs.go:236] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key
	I1108 09:18:38.072885  334359 certs.go:257] generating profile certs ...
	I1108 09:18:38.072978  334359 certs.go:360] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/client.key
	I1108 09:18:38.073036  334359 certs.go:360] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key.88e29f34
	I1108 09:18:38.073085  334359 certs.go:360] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key
	I1108 09:18:38.073189  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem (1338 bytes)
	W1108 09:18:38.073218  334359 certs.go:480] ignoring /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369_empty.pem, impossibly tiny 0 bytes
	I1108 09:18:38.073227  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca-key.pem (1679 bytes)
	I1108 09:18:38.073248  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/ca.pem (1082 bytes)
	I1108 09:18:38.073270  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/cert.pem (1123 bytes)
	I1108 09:18:38.073326  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/certs/key.pem (1675 bytes)
	I1108 09:18:38.073374  334359 certs.go:484] found cert: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem (1708 bytes)
	I1108 09:18:38.073971  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1108 09:18:38.092677  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I1108 09:18:38.110876  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1108 09:18:38.129782  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1108 09:18:38.151737  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I1108 09:18:38.169621  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I1108 09:18:38.186099  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1108 09:18:38.202890  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/newest-cni-620528/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1108 09:18:38.219921  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/ssl/certs/93692.pem --> /usr/share/ca-certificates/93692.pem (1708 bytes)
	I1108 09:18:38.236803  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1108 09:18:38.253736  334359 ssh_runner.go:362] scp /home/jenkins/minikube-integration/21866-5860/.minikube/certs/9369.pem --> /usr/share/ca-certificates/9369.pem (1338 bytes)
	I1108 09:18:38.271696  334359 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1108 09:18:38.283947  334359 ssh_runner.go:195] Run: openssl version
	I1108 09:18:38.290131  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/93692.pem && ln -fs /usr/share/ca-certificates/93692.pem /etc/ssl/certs/93692.pem"
	I1108 09:18:38.298700  334359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/93692.pem
	I1108 09:18:38.302484  334359 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Nov  8 08:35 /usr/share/ca-certificates/93692.pem
	I1108 09:18:38.302538  334359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/93692.pem
	I1108 09:18:38.336062  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/93692.pem /etc/ssl/certs/3ec20f2e.0"
	I1108 09:18:38.344338  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1108 09:18:38.352566  334359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:38.356110  334359 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Nov  8 08:29 /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:38.356166  334359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1108 09:18:38.389582  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1108 09:18:38.397744  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9369.pem && ln -fs /usr/share/ca-certificates/9369.pem /etc/ssl/certs/9369.pem"
	I1108 09:18:38.406339  334359 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9369.pem
	I1108 09:18:38.409982  334359 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Nov  8 08:35 /usr/share/ca-certificates/9369.pem
	I1108 09:18:38.410038  334359 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9369.pem
	I1108 09:18:38.445707  334359 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9369.pem /etc/ssl/certs/51391683.0"
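	The `openssl x509 -hash -noout` runs above print the OpenSSL subject-name hash that the symlink names encode: a certificate under /usr/share/ca-certificates is trusted system-wide once an /etc/ssl/certs/<hash>.0 link points at it, which is exactly what the b5213941.0 link for minikubeCA reflects. Reproducing the hash by hand:
	
		$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
		b5213941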
	I1108 09:18:38.454145  334359 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1108 09:18:38.458313  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I1108 09:18:38.492065  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I1108 09:18:38.526048  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I1108 09:18:38.561206  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I1108 09:18:38.603651  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I1108 09:18:38.651170  334359 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
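	The block of `-checkend 86400` runs above are 24-hour expiry probes: openssl exits 0 if the certificate will still be valid 86400 seconds from now and non-zero otherwise, which is how minikube decides whether the control-plane certs need regenerating. The same probe in isolation:
	
		openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 && echo still-valid || echo expiring-soon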
	I1108 09:18:38.695091  334359 kubeadm.go:401] StartCluster: {Name:newest-cni-620528 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:3072 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:newest-cni-620528 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 09:18:38.695189  334359 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1108 09:18:38.695259  334359 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1108 09:18:38.734650  334359 cri.go:89] found id: "79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3"
	I1108 09:18:38.734672  334359 cri.go:89] found id: "0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea"
	I1108 09:18:38.734676  334359 cri.go:89] found id: "760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969"
	I1108 09:18:38.734679  334359 cri.go:89] found id: "b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428"
	I1108 09:18:38.734682  334359 cri.go:89] found id: ""
	I1108 09:18:38.734721  334359 ssh_runner.go:195] Run: sudo runc list -f json
	W1108 09:18:38.747269  334359 kubeadm.go:408] unpause failed: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T09:18:38Z" level=error msg="open /run/runc: no such file or directory"
	I1108 09:18:38.747371  334359 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1108 09:18:38.755122  334359 kubeadm.go:417] found existing configuration files, will attempt cluster restart
	I1108 09:18:38.755140  334359 kubeadm.go:598] restartPrimaryControlPlane start ...
	I1108 09:18:38.755186  334359 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I1108 09:18:38.762890  334359 kubeadm.go:131] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:18:38.763314  334359 kubeconfig.go:47] verify endpoint returned: get endpoint: "newest-cni-620528" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:38.763450  334359 kubeconfig.go:62] /home/jenkins/minikube-integration/21866-5860/kubeconfig needs updating (will repair): [kubeconfig missing "newest-cni-620528" cluster setting kubeconfig missing "newest-cni-620528" context setting]
	I1108 09:18:38.763793  334359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.764931  334359 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I1108 09:18:38.773493  334359 kubeadm.go:635] The running cluster does not require reconfiguration: 192.168.103.2
	I1108 09:18:38.773517  334359 kubeadm.go:602] duration metric: took 18.371472ms to restartPrimaryControlPlane
	I1108 09:18:38.773525  334359 kubeadm.go:403] duration metric: took 78.447318ms to StartCluster
	I1108 09:18:38.773540  334359 settings.go:142] acquiring lock: {Name:mk83d7b2c3aae248a3008847bbe385f6bcbb3eb0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.773604  334359 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:18:38.774148  334359 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/21866-5860/kubeconfig: {Name:mk032c63ceda0acf1aacd7930d9822aad6a1e783 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1108 09:18:38.774411  334359 start.go:236] Will wait 6m0s for node &{Name: IP:192.168.103.2 Port:8443 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1108 09:18:38.774487  334359 addons.go:512] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubetail:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I1108 09:18:38.774571  334359 addons.go:70] Setting storage-provisioner=true in profile "newest-cni-620528"
	I1108 09:18:38.774590  334359 addons.go:239] Setting addon storage-provisioner=true in "newest-cni-620528"
	W1108 09:18:38.774598  334359 addons.go:248] addon storage-provisioner should already be in state true
	I1108 09:18:38.774627  334359 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:38.774612  334359 addons.go:70] Setting dashboard=true in profile "newest-cni-620528"
	I1108 09:18:38.774635  334359 addons.go:70] Setting default-storageclass=true in profile "newest-cni-620528"
	I1108 09:18:38.774662  334359 addons.go:239] Setting addon dashboard=true in "newest-cni-620528"
	I1108 09:18:38.774667  334359 addons_storage_classes.go:34] enableOrDisableStorageClasses default-storageclass=true on "newest-cni-620528"
	W1108 09:18:38.774671  334359 addons.go:248] addon dashboard should already be in state true
	I1108 09:18:38.774698  334359 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:38.774708  334359 config.go:182] Loaded profile config "newest-cni-620528": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:18:38.774988  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.775128  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.775155  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.777872  334359 out.go:179] * Verifying Kubernetes components...
	I1108 09:18:38.779377  334359 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1108 09:18:38.800691  334359 addons.go:239] Setting addon default-storageclass=true in "newest-cni-620528"
	W1108 09:18:38.800716  334359 addons.go:248] addon default-storageclass should already be in state true
	I1108 09:18:38.800748  334359 host.go:66] Checking if "newest-cni-620528" exists ...
	I1108 09:18:38.801241  334359 cli_runner.go:164] Run: docker container inspect newest-cni-620528 --format={{.State.Status}}
	I1108 09:18:38.805451  334359 out.go:179]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1108 09:18:38.806417  334359 out.go:179]   - Using image docker.io/kubernetesui/dashboard:v2.7.0
	I1108 09:18:38.807401  334359 addons.go:436] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:38.807449  334359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1108 09:18:38.807507  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:38.809476  334359 out.go:179]   - Using image registry.k8s.io/echoserver:1.4
	I1108 09:18:38.810654  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-ns.yaml
	I1108 09:18:38.810687  334359 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes)
	I1108 09:18:38.810758  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:38.840631  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:38.841954  334359 addons.go:436] installing /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:38.842012  334359 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1108 09:18:38.842073  334359 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" newest-cni-620528
	I1108 09:18:38.846258  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:38.866362  334359 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/newest-cni-620528/id_rsa Username:docker}
	I1108 09:18:38.916091  334359 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1108 09:18:38.931693  334359 api_server.go:52] waiting for apiserver process to appear ...
	I1108 09:18:38.931764  334359 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:18:38.945969  334359 api_server.go:72] duration metric: took 171.524429ms to wait for apiserver process to appear ...
	I1108 09:18:38.945993  334359 api_server.go:88] waiting for apiserver healthz status ...
	I1108 09:18:38.946012  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:38.952649  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml
	I1108 09:18:38.952674  334359 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes)
	I1108 09:18:38.958048  334359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1108 09:18:38.966551  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml
	I1108 09:18:38.966577  334359 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes)
	I1108 09:18:38.980914  334359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1108 09:18:38.983121  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-configmap.yaml
	I1108 09:18:38.983146  334359 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes)
	I1108 09:18:38.999186  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-dp.yaml
	I1108 09:18:38.999208  334359 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4201 bytes)
	I1108 09:18:39.016587  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-role.yaml
	I1108 09:18:39.016614  334359 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes)
	I1108 09:18:39.033445  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml
	I1108 09:18:39.033469  334359 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes)
	I1108 09:18:39.049746  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-sa.yaml
	I1108 09:18:39.049767  334359 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes)
	I1108 09:18:39.062258  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-secret.yaml
	I1108 09:18:39.062297  334359 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes)
	I1108 09:18:39.074544  334359 addons.go:436] installing /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:18:39.074569  334359 ssh_runner.go:362] scp dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes)
	I1108 09:18:39.087187  334359 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml
	I1108 09:18:40.531705  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W1108 09:18:40.531742  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I1108 09:18:40.531759  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:40.549663  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	W1108 09:18:40.549717  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\": RBAC: clusterrole.rbac.authorization.k8s.io \"system:public-info-viewer\" not found","reason":"Forbidden","details":{},"code":403}
	I1108 09:18:40.946086  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:40.950536  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:18:40.950563  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:18:41.069056  334359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.088094299s)
	I1108 09:18:41.069683  334359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (1.98245219s)
	I1108 09:18:41.069770  334359 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.34.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.111687076s)
	I1108 09:18:41.071523  334359 out.go:179] * Some dashboard features require the metrics-server addon. To enable all features please run:
	
		minikube -p newest-cni-620528 addons enable metrics-server
	
	I1108 09:18:41.080770  334359 out.go:179] * Enabled addons: storage-provisioner, dashboard, default-storageclass
	I1108 09:18:41.082116  334359 addons.go:515] duration metric: took 2.307638098s for enable addons: enabled=[storage-provisioner dashboard default-storageclass]
	I1108 09:18:41.447110  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:41.451345  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W1108 09:18:41.451368  334359 api_server.go:103] status: https://192.168.103.2:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-kubernetes-service-cidr-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I1108 09:18:41.946978  334359 api_server.go:253] Checking apiserver healthz at https://192.168.103.2:8443/healthz ...
	I1108 09:18:41.951154  334359 api_server.go:279] https://192.168.103.2:8443/healthz returned 200:
	ok
	I1108 09:18:41.952165  334359 api_server.go:141] control plane version: v1.34.1
	I1108 09:18:41.952195  334359 api_server.go:131] duration metric: took 3.006194674s to wait for apiserver health ...
	I1108 09:18:41.952206  334359 system_pods.go:43] waiting for kube-system pods to appear ...
	I1108 09:18:41.955755  334359 system_pods.go:59] 8 kube-system pods found
	I1108 09:18:41.955789  334359 system_pods.go:61] "coredns-66bc5c9577-7fndk" [ee377f7d-6e12-40b3-9257-b0558cadc023] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:41.955799  334359 system_pods.go:61] "etcd-newest-cni-620528" [d267a844-8f28-4d49-a9a3-f19643f494fe] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I1108 09:18:41.955810  334359 system_pods.go:61] "kindnet-fk7tk" [8240271d-256f-4fde-82b4-0c071eb000b6] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I1108 09:18:41.955816  334359 system_pods.go:61] "kube-apiserver-newest-cni-620528" [a9d10205-e74b-49a0-ab30-fc4274b6c40a] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I1108 09:18:41.955825  334359 system_pods.go:61] "kube-controller-manager-newest-cni-620528" [5ca73710-f538-4265-a4f3-fe797f8e0cfd] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I1108 09:18:41.955835  334359 system_pods.go:61] "kube-proxy-xrf7w" [ef13acfb-b7b4-4aba-8145-f2ce94813f8e] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I1108 09:18:41.955843  334359 system_pods.go:61] "kube-scheduler-newest-cni-620528" [6dd7feec-3ba2-40c2-b761-3aa6855cf4f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I1108 09:18:41.955849  334359 system_pods.go:61] "storage-provisioner" [4e2975a8-6a90-42a4-b1bb-b425b99ad8be] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. no new claims to deallocate, preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I1108 09:18:41.955857  334359 system_pods.go:74] duration metric: took 3.644129ms to wait for pod list to return data ...
	I1108 09:18:41.955864  334359 default_sa.go:34] waiting for default service account to be created ...
	I1108 09:18:41.958098  334359 default_sa.go:45] found service account: "default"
	I1108 09:18:41.958116  334359 default_sa.go:55] duration metric: took 2.246753ms for default service account to be created ...
	I1108 09:18:41.958126  334359 kubeadm.go:587] duration metric: took 3.183687884s to wait for: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I1108 09:18:41.958150  334359 node_conditions.go:102] verifying NodePressure condition ...
	I1108 09:18:41.960411  334359 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I1108 09:18:41.960432  334359 node_conditions.go:123] node cpu capacity is 8
	I1108 09:18:41.960445  334359 node_conditions.go:105] duration metric: took 2.291276ms to run NodePressure ...
	I1108 09:18:41.960455  334359 start.go:242] waiting for startup goroutines ...
	I1108 09:18:41.960462  334359 start.go:247] waiting for cluster config update ...
	I1108 09:18:41.960472  334359 start.go:256] writing updated cluster config ...
	I1108 09:18:41.960711  334359 ssh_runner.go:195] Run: rm -f paused
	I1108 09:18:42.008417  334359 start.go:628] kubectl: 1.34.1, cluster: 1.34.1 (minor skew: 0)
	I1108 09:18:42.011648  334359 out.go:179] * Done! kubectl is now configured to use "newest-cni-620528" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.746641008Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.750523983Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=e20830eb-d7ba-4d39-9684-c8ede54613de name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.751207141Z" level=warning msg="Skipping invalid sysctl specified by config {net.ipv4.ip_unprivileged_port_start 0}: \"net.ipv4.ip_unprivileged_port_start\" not allowed with host net enabled" id=a9763365-4175-46bc-bc62-c52376c00ac1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.751941834Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.752575673Z" level=info msg="Not creating sandbox cgroup: sbParent is empty"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.75266612Z" level=info msg="Ran pod sandbox f8083512327610bf91d51ab90fc881a84029d6e1cec422858a7b15223ba12951 with infra container: kube-system/kube-proxy-xrf7w/POD" id=e20830eb-d7ba-4d39-9684-c8ede54613de name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.753109916Z" level=info msg="Ran pod sandbox 8b698891d6b0d4737a381438ce348b9f7879822dbc4536a9e87146eb5e2f8a8c with infra container: kube-system/kindnet-fk7tk/POD" id=a9763365-4175-46bc-bc62-c52376c00ac1 name=/runtime.v1.RuntimeService/RunPodSandbox
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.753769062Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=eeaf9dea-ff0e-4016-a753-072cb563561a name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.754005852Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=2da0045e-a039-4f72-b2d1-b6f2e50c9e07 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.754637475Z" level=info msg="Checking image status: registry.k8s.io/kube-proxy:v1.34.1" id=8c5fb6a7-4135-4166-a8c8-d5b547141214 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.754895277Z" level=info msg="Checking image status: docker.io/kindest/kindnetd:v20250512-df8de77b" id=25bd4ecc-d331-40d6-b8cc-b7c29e8bdb63 name=/runtime.v1.ImageService/ImageStatus
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.755621841Z" level=info msg="Creating container: kube-system/kube-proxy-xrf7w/kube-proxy" id=21ca463e-3420-4bb9-a246-fba6c4eb0a8c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.755742066Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.755803605Z" level=info msg="Creating container: kube-system/kindnet-fk7tk/kindnet-cni" id=dd3bed0f-268e-4faf-bed3-48dfa7664af7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.75587793Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.760427942Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.762095195Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.762350591Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.762877306Z" level=info msg="Allowed annotations are specified for workload [io.containers.trace-syscall]"
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.788017057Z" level=info msg="Created container bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52: kube-system/kindnet-fk7tk/kindnet-cni" id=dd3bed0f-268e-4faf-bed3-48dfa7664af7 name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.788645864Z" level=info msg="Starting container: bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52" id=5b92c65e-04f1-49dd-8a9b-09c7e5af08f5 name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.790395469Z" level=info msg="Started container" PID=1044 containerID=bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52 description=kube-system/kindnet-fk7tk/kindnet-cni id=5b92c65e-04f1-49dd-8a9b-09c7e5af08f5 name=/runtime.v1.RuntimeService/StartContainer sandboxID=8b698891d6b0d4737a381438ce348b9f7879822dbc4536a9e87146eb5e2f8a8c
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.791205245Z" level=info msg="Created container cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91: kube-system/kube-proxy-xrf7w/kube-proxy" id=21ca463e-3420-4bb9-a246-fba6c4eb0a8c name=/runtime.v1.RuntimeService/CreateContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.791782823Z" level=info msg="Starting container: cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91" id=719d587f-1d0f-44b3-b043-f67d66a15aca name=/runtime.v1.RuntimeService/StartContainer
	Nov 08 09:18:41 newest-cni-620528 crio[519]: time="2025-11-08T09:18:41.794389034Z" level=info msg="Started container" PID=1043 containerID=cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91 description=kube-system/kube-proxy-xrf7w/kube-proxy id=719d587f-1d0f-44b3-b043-f67d66a15aca name=/runtime.v1.RuntimeService/StartContainer sandboxID=f8083512327610bf91d51ab90fc881a84029d6e1cec422858a7b15223ba12951
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD                                         NAMESPACE
	bc09c8ed5007b       409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c   5 seconds ago       Running             kindnet-cni               1                   8b698891d6b0d       kindnet-fk7tk                               kube-system
	cc9c2f6d17b30       fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7   5 seconds ago       Running             kube-proxy                1                   f808351232761       kube-proxy-xrf7w                            kube-system
	79e07ac2fc3d3       c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97   8 seconds ago       Running             kube-apiserver            1                   e46973f5bc07d       kube-apiserver-newest-cni-620528            kube-system
	0ae6c102b337e       5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115   8 seconds ago       Running             etcd                      1                   da1bdaf3b5d20       etcd-newest-cni-620528                      kube-system
	760841a0a23f5       c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f   8 seconds ago       Running             kube-controller-manager   1                   e74e52bb72ce7       kube-controller-manager-newest-cni-620528   kube-system
	b07e735128b3d       7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813   8 seconds ago       Running             kube-scheduler            1                   d60ffd76c3c4d       kube-scheduler-newest-cni-620528            kube-system
	
	
	==> describe nodes <==
	Name:               newest-cni-620528
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=newest-cni-620528
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=e35d22c939988714b1b288802286ec2054941f36
	                    minikube.k8s.io/name=newest-cni-620528
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_11_08T09_18_21_0700
	                    minikube.k8s.io/version=v1.37.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Sat, 08 Nov 2025 09:18:18 +0000
	Taints:             node.kubernetes.io/not-ready:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  newest-cni-620528
	  AcquireTime:     <unset>
	  RenewTime:       Sat, 08 Nov 2025 09:18:40 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            False   Sat, 08 Nov 2025 09:18:40 +0000   Sat, 08 Nov 2025 09:18:15 +0000   KubeletNotReady              container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: plugin status uninitialized
	Addresses:
	  InternalIP:  192.168.103.2
	  Hostname:    newest-cni-620528
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32863356Ki
	  pods:               110
	System Info:
	  Machine ID:                 608131c53731cf9698d1f7346905c52d
	  System UUID:                fd9cdc5f-2e20-41a6-aefd-53097190daa1
	  Boot ID:                    df838595-98f8-4c2f-86f9-12748c0abf97
	  Kernel Version:             6.8.0-1043-gcp
	  OS Image:                   Debian GNU/Linux 12 (bookworm)
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.34.1
	  Kubelet Version:            v1.34.1
	  Kube-Proxy Version:         
	PodCIDR:                      10.42.0.0/24
	PodCIDRs:                     10.42.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-newest-cni-620528                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         27s
	  kube-system                 kindnet-fk7tk                                100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      21s
	  kube-system                 kube-apiserver-newest-cni-620528             250m (3%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-controller-manager-newest-cni-620528    200m (2%)     0 (0%)      0 (0%)           0 (0%)         27s
	  kube-system                 kube-proxy-xrf7w                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  kube-system                 kube-scheduler-newest-cni-620528             100m (1%)     0 (0%)      0 (0%)           0 (0%)         27s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   100m (1%)
	  memory             150Mi (0%)  50Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 20s                kube-proxy       
	  Normal  Starting                 5s                 kube-proxy       
	  Normal  Starting                 32s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x8 over 32s)  kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    27s                kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  27s                kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     27s                kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  Starting                 27s                kubelet          Starting kubelet.
	  Normal  RegisteredNode           22s                node-controller  Node newest-cni-620528 event: Registered Node newest-cni-620528 in Controller
	  Normal  Starting                 9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9s (x8 over 9s)    kubelet          Node newest-cni-620528 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x8 over 9s)    kubelet          Node newest-cni-620528 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x8 over 9s)    kubelet          Node newest-cni-620528 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                 node-controller  Node newest-cni-620528 event: Registered Node newest-cni-620528 in Controller
	
	
	==> dmesg <==
	[  +0.000009] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +11.540350] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.993028] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca f8 6c 8a eb 2a 08 06
	[  +0.000424] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000004] ll header: 00000000: ff ff ff ff ff ff 82 86 65 69 46 a8 08 06
	[ +32.822882] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[Nov 8 09:15] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff 6e 5f f2 7c 73 f8 08 06
	[  +0.000414] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff de 92 de 39 8c 3d 08 06
	[  +0.050884] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000001] IPv4: martian source 10.244.0.1 from 10.244.0.2, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	[  -0.000001] ll header: 00000000: ff ff ff ff ff ff ea b4 ac 85 3d 63 08 06
	[  +5.699877] IPv4: martian source 10.244.0.1 from 10.244.0.3, on dev eth0
	[  +0.000008] ll header: 00000000: ff ff ff ff ff ff 76 b7 1a 57 04 20 08 06
	[  +0.000367] IPv4: martian source 10.244.0.3 from 10.244.0.2, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff e6 76 dd df d4 ff 08 06
	[ +41.378277] IPv4: martian source 10.244.0.1 from 10.244.0.4, on dev eth0
	[  +0.000007] ll header: 00000000: ff ff ff ff ff ff ca 12 2a 2a 38 49 08 06
	[  +0.000384] IPv4: martian source 10.244.0.4 from 10.244.0.3, on dev eth0
	[  +0.000006] ll header: 00000000: ff ff ff ff ff ff ea 3b 2b 9e 64 cf 08 06
	
	
	==> etcd [0ae6c102b337ed07e8e9ca0b478ad1f728f0204b1bb2f870a2fa36dfbf8418ea] <==
	{"level":"warn","ts":"2025-11-08T09:18:39.914020Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56550","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.920718Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56564","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.929275Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56588","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.935417Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56592","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.941264Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56612","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.947139Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56632","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.956435Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56646","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.969880Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56668","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.975754Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56676","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.986431Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56696","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.992415Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56720","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:39.998503Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56736","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.004477Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56744","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.010221Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56762","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.016272Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56794","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.022965Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56812","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.028883Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56832","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.034975Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56842","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.041023Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56868","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.047156Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56898","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.053235Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56916","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.073708Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56942","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.080689Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56972","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.086515Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:56982","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2025-11-08T09:18:40.135310Z","caller":"embed/config_logging.go:188","msg":"rejected connection on client endpoint","remote-addr":"127.0.0.1:57010","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 09:18:47 up  1:01,  0 user,  load average: 4.45, 4.03, 2.69
	Linux newest-cni-620528 6.8.0-1043-gcp #46~22.04.1-Ubuntu SMP Wed Oct 22 19:00:03 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
	
	
	==> kindnet [bc09c8ed5007b21473190911ce5ca6f3de2ade2f3b75d02230d04a2d05f09d52] <==
	I1108 09:18:41.935432       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1108 09:18:41.935649       1 main.go:139] hostIP = 192.168.103.2
	podIP = 192.168.103.2
	I1108 09:18:41.935784       1 main.go:148] setting mtu 1500 for CNI 
	I1108 09:18:41.935802       1 main.go:178] kindnetd IP family: "ipv4"
	I1108 09:18:41.935828       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	time="2025-11-08T09:18:42Z" level=info msg="Created plugin 10-kube-network-policies (kindnetd, handles RunPodSandbox,RemovePodSandbox)"
	I1108 09:18:42.232265       1 controller.go:377] "Starting controller" name="kube-network-policies"
	I1108 09:18:42.232388       1 controller.go:381] "Waiting for informer caches to sync"
	I1108 09:18:42.232400       1 shared_informer.go:350] "Waiting for caches to sync" controller="kube-network-policies"
	I1108 09:18:42.232562       1 controller.go:390] nri plugin exited: failed to connect to NRI service: dial unix /var/run/nri/nri.sock: connect: no such file or directory
	I1108 09:18:42.632776       1 shared_informer.go:357] "Caches are synced" controller="kube-network-policies"
	I1108 09:18:42.632802       1 metrics.go:72] Registering metrics
	I1108 09:18:42.632866       1 controller.go:711] "Syncing nftables rules"
	
	
	==> kube-apiserver [79e07ac2fc3d37dd6f1ca52e139cdddcefa7ebebb5800da3fe6681d75bbf53b3] <==
	I1108 09:18:40.598808       1 autoregister_controller.go:144] Starting autoregister controller
	I1108 09:18:40.598807       1 shared_informer.go:356] "Caches are synced" controller="kubernetes-service-cidr-controller"
	I1108 09:18:40.598813       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1108 09:18:40.598951       1 cache.go:39] Caches are synced for autoregister controller
	I1108 09:18:40.598869       1 default_servicecidr_controller.go:137] Shutting down kubernetes-service-cidr-controller
	I1108 09:18:40.598897       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1108 09:18:40.599106       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1108 09:18:40.599189       1 shared_informer.go:356] "Caches are synced" controller="ipallocator-repair-controller"
	I1108 09:18:40.604455       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	E1108 09:18:40.605230       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1108 09:18:40.628186       1 shared_informer.go:356] "Caches are synced" controller="node_authorizer"
	I1108 09:18:40.635877       1 shared_informer.go:356] "Caches are synced" controller="*generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]"
	I1108 09:18:40.635918       1 policy_source.go:240] refreshing policies
	I1108 09:18:40.649323       1 controller.go:667] quota admission added evaluator for: leases.coordination.k8s.io
	I1108 09:18:40.869123       1 controller.go:667] quota admission added evaluator for: namespaces
	I1108 09:18:40.895326       1 controller.go:667] quota admission added evaluator for: deployments.apps
	I1108 09:18:40.912696       1 controller.go:667] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1108 09:18:40.921002       1 controller.go:667] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1108 09:18:40.927854       1 controller.go:667] quota admission added evaluator for: serviceaccounts
	I1108 09:18:40.963353       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.106.111.178"}
	I1108 09:18:40.973367       1 alloc.go:328] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.138.181"}
	I1108 09:18:41.502325       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1108 09:18:43.928443       1 controller.go:667] quota admission added evaluator for: endpoints
	I1108 09:18:44.378123       1 controller.go:667] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1108 09:18:44.428238       1 controller.go:667] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [760841a0a23f5c38b579be61096e99f8c443ee96b4072d0f1c06b86506643969] <==
	I1108 09:18:43.924981       1 shared_informer.go:356] "Caches are synced" controller="bootstrap_signer"
	I1108 09:18:43.925038       1 shared_informer.go:356] "Caches are synced" controller="HPA"
	I1108 09:18:43.925055       1 shared_informer.go:356] "Caches are synced" controller="certificate-csrapproving"
	I1108 09:18:43.925083       1 shared_informer.go:356] "Caches are synced" controller="GC"
	I1108 09:18:43.925119       1 shared_informer.go:356] "Caches are synced" controller="VAC protection"
	I1108 09:18:43.925164       1 shared_informer.go:356] "Caches are synced" controller="validatingadmissionpolicy-status"
	I1108 09:18:43.925262       1 shared_informer.go:356] "Caches are synced" controller="persistent volume"
	I1108 09:18:43.925468       1 shared_informer.go:356] "Caches are synced" controller="TTL after finished"
	I1108 09:18:43.926537       1 shared_informer.go:356] "Caches are synced" controller="taint"
	I1108 09:18:43.926623       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice_mirroring"
	I1108 09:18:43.926658       1 node_lifecycle_controller.go:1221] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1108 09:18:43.926756       1 shared_informer.go:356] "Caches are synced" controller="crt configmap"
	I1108 09:18:43.926768       1 node_lifecycle_controller.go:873] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="newest-cni-620528"
	I1108 09:18:43.926822       1 node_lifecycle_controller.go:1025] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I1108 09:18:43.929974       1 shared_informer.go:356] "Caches are synced" controller="ephemeral"
	I1108 09:18:43.931188       1 shared_informer.go:356] "Caches are synced" controller="legacy-service-account-token-cleaner"
	I1108 09:18:43.931362       1 shared_informer.go:356] "Caches are synced" controller="resource quota"
	I1108 09:18:43.933665       1 shared_informer.go:356] "Caches are synced" controller="service-cidr-controller"
	I1108 09:18:43.936473       1 shared_informer.go:356] "Caches are synced" controller="endpoint_slice"
	I1108 09:18:43.937647       1 shared_informer.go:356] "Caches are synced" controller="ReplicationController"
	I1108 09:18:43.941866       1 shared_informer.go:356] "Caches are synced" controller="disruption"
	I1108 09:18:43.943044       1 shared_informer.go:356] "Caches are synced" controller="ClusterRoleAggregator"
	I1108 09:18:43.945322       1 shared_informer.go:356] "Caches are synced" controller="daemon sets"
	I1108 09:18:43.946554       1 shared_informer.go:356] "Caches are synced" controller="deployment"
	I1108 09:18:43.953941       1 shared_informer.go:356] "Caches are synced" controller="garbage collector"
	
	
	==> kube-proxy [cc9c2f6d17b30330b677e87b2ff6319041cfe684685eca11d47222d48f05bb91] <==
	I1108 09:18:41.828154       1 server_linux.go:53] "Using iptables proxy"
	I1108 09:18:41.883168       1 shared_informer.go:349] "Waiting for caches to sync" controller="node informer cache"
	I1108 09:18:41.983505       1 shared_informer.go:356] "Caches are synced" controller="node informer cache"
	I1108 09:18:41.983547       1 server.go:219] "Successfully retrieved NodeIPs" NodeIPs=["192.168.103.2"]
	E1108 09:18:41.983650       1 server.go:256] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1108 09:18:42.002615       1 server.go:265] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1108 09:18:42.002684       1 server_linux.go:132] "Using iptables Proxier"
	I1108 09:18:42.008027       1 proxier.go:242] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1108 09:18:42.008613       1 server.go:527] "Version info" version="v1.34.1"
	I1108 09:18:42.008654       1 server.go:529] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:18:42.011483       1 config.go:200] "Starting service config controller"
	I1108 09:18:42.011505       1 config.go:106] "Starting endpoint slice config controller"
	I1108 09:18:42.011507       1 shared_informer.go:349] "Waiting for caches to sync" controller="service config"
	I1108 09:18:42.011514       1 shared_informer.go:349] "Waiting for caches to sync" controller="endpoint slice config"
	I1108 09:18:42.011494       1 config.go:403] "Starting serviceCIDR config controller"
	I1108 09:18:42.011535       1 shared_informer.go:349] "Waiting for caches to sync" controller="serviceCIDR config"
	I1108 09:18:42.011641       1 config.go:309] "Starting node config controller"
	I1108 09:18:42.011660       1 shared_informer.go:349] "Waiting for caches to sync" controller="node config"
	I1108 09:18:42.011667       1 shared_informer.go:356] "Caches are synced" controller="node config"
	I1108 09:18:42.112146       1 shared_informer.go:356] "Caches are synced" controller="service config"
	I1108 09:18:42.112209       1 shared_informer.go:356] "Caches are synced" controller="serviceCIDR config"
	I1108 09:18:42.112242       1 shared_informer.go:356] "Caches are synced" controller="endpoint slice config"
	
	
	==> kube-scheduler [b07e735128b3d1aa2e0ea34181eff97bbb6d804be59f16d4a83a8aa6be615428] <==
	I1108 09:18:39.260154       1 serving.go:386] Generated self-signed cert in-memory
	W1108 09:18:40.529823       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1108 09:18:40.529861       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1108 09:18:40.529874       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1108 09:18:40.529883       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1108 09:18:40.553875       1 server.go:175] "Starting Kubernetes Scheduler" version="v1.34.1"
	I1108 09:18:40.553911       1 server.go:177] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1108 09:18:40.557632       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:18:40.557659       1 shared_informer.go:349] "Waiting for caches to sync" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1108 09:18:40.558085       1 secure_serving.go:211] Serving securely on 127.0.0.1:10259
	I1108 09:18:40.558498       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1108 09:18:40.658841       1 shared_informer.go:356] "Caches are synced" controller="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	
	
	==> kubelet <==
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.173925     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-620528\" not found" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.174079     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-620528\" not found" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.174234     669 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"newest-cni-620528\" not found" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.540716     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.654644     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-newest-cni-620528\" already exists" pod="kube-system/kube-scheduler-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.654692     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/etcd-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.660814     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"etcd-newest-cni-620528\" already exists" pod="kube-system/etcd-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.660851     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.666900     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-newest-cni-620528\" already exists" pod="kube-system/kube-apiserver-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.666933     669 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: E1108 09:18:40.674704     669 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-newest-cni-620528\" already exists" pod="kube-system/kube-controller-manager-newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.737221     669 kubelet_node_status.go:124] "Node was previously registered" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.737353     669 kubelet_node_status.go:78] "Successfully registered node" node="newest-cni-620528"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.737393     669 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="10.42.0.0/24"
	Nov 08 09:18:40 newest-cni-620528 kubelet[669]: I1108 09:18:40.738235     669 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.42.0.0/24"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.137357     669 apiserver.go:52] "Watching apiserver"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.240353     669 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340222     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-xtables-lock\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340269     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-cni-cfg\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340335     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef13acfb-b7b4-4aba-8145-f2ce94813f8e-lib-modules\") pod \"kube-proxy-xrf7w\" (UID: \"ef13acfb-b7b4-4aba-8145-f2ce94813f8e\") " pod="kube-system/kube-proxy-xrf7w"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340369     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-xtables-lock\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:41 newest-cni-620528 kubelet[669]: I1108 09:18:41.340391     669 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8240271d-256f-4fde-82b4-0c071eb000b6-lib-modules\") pod \"kindnet-fk7tk\" (UID: \"8240271d-256f-4fde-82b4-0c071eb000b6\") " pod="kube-system/kindnet-fk7tk"
	Nov 08 09:18:42 newest-cni-620528 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
	Nov 08 09:18:42 newest-cni-620528 systemd[1]: kubelet.service: Deactivated successfully.
	Nov 08 09:18:42 newest-cni-620528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.

-- /stdout --
helpers_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-620528 -n newest-cni-620528
helpers_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-620528 -n newest-cni-620528: exit status 2 (314.821515ms)

-- stdout --
	Running

-- /stdout --
helpers_test.go:262: status error: exit status 2 (may be ok)
helpers_test.go:269: (dbg) Run:  kubectl --context newest-cni-620528 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:280: non-running pods: coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq
helpers_test.go:282: ======> post-mortem[TestStartStop/group/newest-cni/serial/Pause]: describe non-running pods <======
helpers_test.go:285: (dbg) Run:  kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq
helpers_test.go:285: (dbg) Non-zero exit: kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq: exit status 1 (58.863769ms)

** stderr ** 
	Error from server (NotFound): pods "coredns-66bc5c9577-7fndk" not found
	Error from server (NotFound): pods "storage-provisioner" not found
	Error from server (NotFound): pods "dashboard-metrics-scraper-6ffb444bf9-l5b8k" not found
	Error from server (NotFound): pods "kubernetes-dashboard-855c9754f9-n9rrq" not found

** /stderr **
helpers_test.go:287: kubectl --context newest-cni-620528 describe pod coredns-66bc5c9577-7fndk storage-provisioner dashboard-metrics-scraper-6ffb444bf9-l5b8k kubernetes-dashboard-855c9754f9-n9rrq: exit status 1
--- FAIL: TestStartStop/group/newest-cni/serial/Pause (5.57s)

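The post-mortem above can be reproduced outside the test harness. A minimal sketch, assuming the newest-cni-620528 profile is still running, lists the same non-running pods the helper reported:

  # List pods in any phase other than Running, across all namespaces
  kubectl --context newest-cni-620528 get po -A \
    --field-selector=status.phase!=Running \
    -o=jsonpath='{.items[*].metadata.name}'
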
Test pass (262/327)

Order passed test Duration
3 TestDownloadOnly/v1.28.0/json-events 4.35
4 TestDownloadOnly/v1.28.0/preload-exists 0
8 TestDownloadOnly/v1.28.0/LogsDuration 0.07
9 TestDownloadOnly/v1.28.0/DeleteAll 0.23
10 TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.34.1/json-events 3.85
13 TestDownloadOnly/v1.34.1/preload-exists 0
17 TestDownloadOnly/v1.34.1/LogsDuration 0.07
18 TestDownloadOnly/v1.34.1/DeleteAll 0.23
19 TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds 0.15
20 TestDownloadOnlyKic 0.42
21 TestBinaryMirror 0.82
22 TestOffline 89.79
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.06
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.06
27 TestAddons/Setup 124.67
31 TestAddons/serial/GCPAuth/Namespaces 0.13
32 TestAddons/serial/GCPAuth/FakeCredentials 8.42
48 TestAddons/StoppedEnableDisable 16.66
49 TestCertOptions 32.51
50 TestCertExpiration 217.82
52 TestForceSystemdFlag 39.43
53 TestForceSystemdEnv 23.28
58 TestErrorSpam/setup 21.29
59 TestErrorSpam/start 0.66
60 TestErrorSpam/status 0.92
61 TestErrorSpam/pause 6.8
62 TestErrorSpam/unpause 5.87
63 TestErrorSpam/stop 18.09
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 37.43
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 5.99
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 2.6
75 TestFunctional/serial/CacheCmd/cache/add_local 0.75
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.06
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
79 TestFunctional/serial/CacheCmd/cache/cache_reload 1.49
80 TestFunctional/serial/CacheCmd/cache/delete 0.12
81 TestFunctional/serial/MinikubeKubectlCmd 0.12
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
83 TestFunctional/serial/ExtraConfig 45.45
84 TestFunctional/serial/ComponentHealth 0.07
85 TestFunctional/serial/LogsCmd 1.17
86 TestFunctional/serial/LogsFileCmd 1.21
87 TestFunctional/serial/InvalidService 3.79
89 TestFunctional/parallel/ConfigCmd 0.45
90 TestFunctional/parallel/DashboardCmd 5.2
91 TestFunctional/parallel/DryRun 0.38
92 TestFunctional/parallel/InternationalLanguage 0.2
93 TestFunctional/parallel/StatusCmd 0.92
98 TestFunctional/parallel/AddonsCmd 0.19
99 TestFunctional/parallel/PersistentVolumeClaim 22.36
101 TestFunctional/parallel/SSHCmd 0.63
102 TestFunctional/parallel/CpCmd 1.93
103 TestFunctional/parallel/MySQL 17.85
104 TestFunctional/parallel/FileSync 0.36
105 TestFunctional/parallel/CertSync 1.89
109 TestFunctional/parallel/NodeLabels 0.07
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.64
113 TestFunctional/parallel/License 0.29
115 TestFunctional/parallel/ImageCommands/ImageListTable 0.22
116 TestFunctional/parallel/ImageCommands/ImageListJson 0.54
117 TestFunctional/parallel/ImageCommands/ImageListYaml 0.22
118 TestFunctional/parallel/ImageCommands/ImageBuild 2.85
119 TestFunctional/parallel/ImageCommands/Setup 0.43
121 TestFunctional/parallel/Version/short 0.08
122 TestFunctional/parallel/Version/components 0.59
124 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.47
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 8.25
129 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
130 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.29
131 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
134 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
138 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.07
139 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
143 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
144 TestFunctional/parallel/ProfileCmd/profile_not_create 0.48
145 TestFunctional/parallel/ProfileCmd/profile_list 0.41
146 TestFunctional/parallel/ProfileCmd/profile_json_output 0.39
147 TestFunctional/parallel/MountCmd/any-port 6.73
148 TestFunctional/parallel/MountCmd/specific-port 2.02
149 TestFunctional/parallel/MountCmd/VerifyCleanup 1.6
150 TestFunctional/parallel/ServiceCmd/List 1.69
151 TestFunctional/parallel/ServiceCmd/JSONOutput 1.69
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
162 TestMultiControlPlane/serial/StartCluster 155.16
163 TestMultiControlPlane/serial/DeployApp 4.31
164 TestMultiControlPlane/serial/PingHostFromPods 1.03
165 TestMultiControlPlane/serial/AddWorkerNode 54.16
166 TestMultiControlPlane/serial/NodeLabels 0.06
167 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.87
168 TestMultiControlPlane/serial/CopyFile 16.9
169 TestMultiControlPlane/serial/StopSecondaryNode 19.75
170 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.7
171 TestMultiControlPlane/serial/RestartSecondaryNode 14.79
172 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.9
173 TestMultiControlPlane/serial/RestartClusterKeepsNodes 100.46
174 TestMultiControlPlane/serial/DeleteSecondaryNode 10.6
175 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.69
176 TestMultiControlPlane/serial/StopCluster 49.53
177 TestMultiControlPlane/serial/RestartCluster 55.37
178 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.69
179 TestMultiControlPlane/serial/AddSecondaryNode 43.64
180 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.88
185 TestJSONOutput/start/Command 37.59
186 TestJSONOutput/start/Audit 0
188 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
189 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/pause/Audit 0
194 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
195 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/unpause/Audit 0
200 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
201 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
203 TestJSONOutput/stop/Command 6.2
204 TestJSONOutput/stop/Audit 0
206 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
207 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
208 TestErrorJSONOutput 0.23
210 TestKicCustomNetwork/create_custom_network 26.63
211 TestKicCustomNetwork/use_default_bridge_network 23.51
212 TestKicExistingNetwork 22.95
213 TestKicCustomSubnet 23.83
214 TestKicStaticIP 27.58
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 48.42
219 TestMountStart/serial/StartWithMountFirst 8.29
220 TestMountStart/serial/VerifyMountFirst 0.27
221 TestMountStart/serial/StartWithMountSecond 4.88
222 TestMountStart/serial/VerifyMountSecond 0.26
223 TestMountStart/serial/DeleteFirst 1.71
224 TestMountStart/serial/VerifyMountPostDelete 0.26
225 TestMountStart/serial/Stop 1.24
226 TestMountStart/serial/RestartStopped 7.04
227 TestMountStart/serial/VerifyMountPostStop 0.27
230 TestMultiNode/serial/FreshStart2Nodes 63.14
231 TestMultiNode/serial/DeployApp2Nodes 3.3
232 TestMultiNode/serial/PingHostFrom2Pods 0.71
233 TestMultiNode/serial/AddNode 56.67
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.63
236 TestMultiNode/serial/CopyFile 9.5
237 TestMultiNode/serial/StopNode 2.24
238 TestMultiNode/serial/StartAfterStop 7.15
239 TestMultiNode/serial/RestartKeepsNodes 82.43
240 TestMultiNode/serial/DeleteNode 5.21
241 TestMultiNode/serial/StopMultiNode 28.55
242 TestMultiNode/serial/RestartMultiNode 28
243 TestMultiNode/serial/ValidateNameConflict 23.28
248 TestPreload 103.51
250 TestScheduledStopUnix 97.61
253 TestInsufficientStorage 9.66
254 TestRunningBinaryUpgrade 49.57
256 TestKubernetesUpgrade 305.2
257 TestMissingContainerUpgrade 103.66
259 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
260 TestNoKubernetes/serial/StartWithK8s 40.88
264 TestNoKubernetes/serial/StartWithStopK8s 18.35
269 TestNetworkPlugins/group/false 3.52
273 TestNoKubernetes/serial/Start 7.8
274 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
275 TestNoKubernetes/serial/ProfileList 2
276 TestNoKubernetes/serial/Stop 1.28
277 TestNoKubernetes/serial/StartNoArgs 11.33
278 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.31
279 TestStoppedBinaryUpgrade/Setup 0.38
280 TestStoppedBinaryUpgrade/Upgrade 39.47
289 TestPause/serial/Start 42.5
290 TestStoppedBinaryUpgrade/MinikubeLogs 0.97
291 TestNetworkPlugins/group/auto/Start 39.59
292 TestPause/serial/SecondStartNoReconfiguration 6.15
294 TestNetworkPlugins/group/kindnet/Start 37.96
295 TestNetworkPlugins/group/auto/KubeletFlags 0.44
296 TestNetworkPlugins/group/auto/NetCatPod 10.23
297 TestNetworkPlugins/group/auto/DNS 0.15
298 TestNetworkPlugins/group/auto/Localhost 0.09
299 TestNetworkPlugins/group/auto/HairPin 0.09
300 TestNetworkPlugins/group/calico/Start 51.27
301 TestNetworkPlugins/group/kindnet/ControllerPod 6
302 TestNetworkPlugins/group/kindnet/KubeletFlags 0.28
303 TestNetworkPlugins/group/kindnet/NetCatPod 8.2
304 TestNetworkPlugins/group/kindnet/DNS 0.12
305 TestNetworkPlugins/group/kindnet/Localhost 0.1
306 TestNetworkPlugins/group/kindnet/HairPin 0.1
307 TestNetworkPlugins/group/custom-flannel/Start 45.4
308 TestNetworkPlugins/group/calico/ControllerPod 6.01
309 TestNetworkPlugins/group/calico/KubeletFlags 0.35
310 TestNetworkPlugins/group/calico/NetCatPod 9.28
311 TestNetworkPlugins/group/enable-default-cni/Start 70.5
312 TestNetworkPlugins/group/calico/DNS 0.11
313 TestNetworkPlugins/group/calico/Localhost 0.12
314 TestNetworkPlugins/group/calico/HairPin 0.1
315 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.33
316 TestNetworkPlugins/group/custom-flannel/NetCatPod 8.19
317 TestNetworkPlugins/group/flannel/Start 44.87
318 TestNetworkPlugins/group/custom-flannel/DNS 0.11
319 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
320 TestNetworkPlugins/group/custom-flannel/HairPin 0.09
321 TestNetworkPlugins/group/bridge/Start 62.39
322 TestNetworkPlugins/group/flannel/ControllerPod 6.01
323 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.28
324 TestNetworkPlugins/group/enable-default-cni/NetCatPod 8.19
325 TestNetworkPlugins/group/flannel/KubeletFlags 0.29
326 TestNetworkPlugins/group/flannel/NetCatPod 8.17
327 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
328 TestNetworkPlugins/group/enable-default-cni/Localhost 0.11
329 TestNetworkPlugins/group/enable-default-cni/HairPin 0.12
330 TestNetworkPlugins/group/flannel/DNS 0.12
331 TestNetworkPlugins/group/flannel/Localhost 0.1
332 TestNetworkPlugins/group/flannel/HairPin 0.11
334 TestStartStop/group/old-k8s-version/serial/FirstStart 52.22
336 TestStartStop/group/no-preload/serial/FirstStart 57.61
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.4
338 TestNetworkPlugins/group/bridge/NetCatPod 10.25
340 TestStartStop/group/embed-certs/serial/FirstStart 45.14
341 TestNetworkPlugins/group/bridge/DNS 0.12
342 TestNetworkPlugins/group/bridge/Localhost 0.1
343 TestNetworkPlugins/group/bridge/HairPin 0.1
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 39.75
346 TestStartStop/group/old-k8s-version/serial/DeployApp 10.25
348 TestStartStop/group/embed-certs/serial/DeployApp 8.28
349 TestStartStop/group/old-k8s-version/serial/Stop 16.01
350 TestStartStop/group/no-preload/serial/DeployApp 7.26
353 TestStartStop/group/embed-certs/serial/Stop 16.31
354 TestStartStop/group/no-preload/serial/Stop 16.34
355 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
356 TestStartStop/group/old-k8s-version/serial/SecondStart 51.68
357 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.25
358 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
359 TestStartStop/group/embed-certs/serial/SecondStart 49.21
360 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.23
361 TestStartStop/group/no-preload/serial/SecondStart 47.37
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 17.34
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 50.41
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.07
368 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
369 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
370 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.23
372 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
373 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
374 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
376 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.24
379 TestStartStop/group/newest-cni/serial/FirstStart 28.79
380 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
381 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
382 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.24
384 TestStartStop/group/newest-cni/serial/DeployApp 0
386 TestStartStop/group/newest-cni/serial/Stop 2.41
387 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.19
388 TestStartStop/group/newest-cni/serial/SecondStart 10.45
389 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
390 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
391 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24

TestDownloadOnly/v1.28.0/json-events (4.35s)

=== RUN   TestDownloadOnly/v1.28.0/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-103718 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-103718 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.349108459s)
--- PASS: TestDownloadOnly/v1.28.0/json-events (4.35s)

TestDownloadOnly/v1.28.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.0/preload-exists
I1108 08:28:49.900633    9369 preload.go:183] Checking if preload exists for k8s version v1.28.0 and runtime crio
I1108 08:28:49.900721    9369 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.28.0/preload-exists (0.00s)

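The check itself is just a stat of the cached tarball. A sketch of doing it by hand, using the path logged above (on a stock install the cache lives under $MINIKUBE_HOME/.minikube/cache):

  # Confirm the v1.28.0 CRI-O preload tarball is present
  ls -lh /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.0-cri-o-overlay-amd64.tar.lz4
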
TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.28.0/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-103718
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-103718: exit status 85 (72.190849ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬──────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │ END TIME │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼──────────┤
	│ start   │ -o=json --download-only -p download-only-103718 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-103718 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │          │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴──────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:28:45
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:28:45.602854    9381 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:28:45.603083    9381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:45.603092    9381 out.go:374] Setting ErrFile to fd 2...
	I1108 08:28:45.603096    9381 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:45.603304    9381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	W1108 08:28:45.603426    9381 root.go:314] Error reading config file at /home/jenkins/minikube-integration/21866-5860/.minikube/config/config.json: open /home/jenkins/minikube-integration/21866-5860/.minikube/config/config.json: no such file or directory
	I1108 08:28:45.603870    9381 out.go:368] Setting JSON to true
	I1108 08:28:45.604784    9381 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":677,"bootTime":1762589849,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:28:45.604871    9381 start.go:143] virtualization: kvm guest
	I1108 08:28:45.607024    9381 out.go:99] [download-only-103718] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:28:45.607201    9381 notify.go:221] Checking for updates...
	W1108 08:28:45.607203    9381 preload.go:349] Failed to list preload files: open /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball: no such file or directory
	I1108 08:28:45.608662    9381 out.go:171] MINIKUBE_LOCATION=21866
	I1108 08:28:45.610590    9381 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:28:45.611861    9381 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:28:45.613082    9381 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:28:45.614336    9381 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 08:28:45.616959    9381 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 08:28:45.617240    9381 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:28:45.640368    9381 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:28:45.640481    9381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:46.039389    9381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-08 08:28:46.027311238 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:46.039544    9381 docker.go:319] overlay module found
	I1108 08:28:46.041356    9381 out.go:99] Using the docker driver based on user configuration
	I1108 08:28:46.041393    9381 start.go:309] selected driver: docker
	I1108 08:28:46.041401    9381 start.go:930] validating driver "docker" against <nil>
	I1108 08:28:46.041513    9381 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:46.099988    9381 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:64 SystemTime:2025-11-08 08:28:46.089368742 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:46.100144    9381 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:28:46.100694    9381 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1108 08:28:46.100858    9381 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 08:28:46.102734    9381 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-103718 host does not exist
	  To start a cluster, run: "minikube start -p download-only-103718"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.0/LogsDuration (0.07s)

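Exit status 85 is the expected result here: a download-only profile has no control plane for "minikube logs" to read. A sketch of the same check by hand:

  # A non-zero exit is the pass condition for this subtest
  out/minikube-linux-amd64 logs -p download-only-103718
  echo $?   # 85 in this run
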
TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.0/DeleteAll (0.23s)

TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-103718
--- PASS: TestDownloadOnly/v1.28.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.34.1/json-events (3.85s)

=== RUN   TestDownloadOnly/v1.34.1/json-events
aaa_download_only_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-713440 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-713440 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (3.850324274s)
--- PASS: TestDownloadOnly/v1.34.1/json-events (3.85s)

TestDownloadOnly/v1.34.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.34.1/preload-exists
I1108 08:28:54.198482    9369 preload.go:183] Checking if preload exists for k8s version v1.34.1 and runtime crio
I1108 08:28:54.198534    9369 preload.go:198] Found local preload: /home/jenkins/minikube-integration/21866-5860/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.34.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.34.1/preload-exists (0.00s)

TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.34.1/LogsDuration
aaa_download_only_test.go:183: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-713440
aaa_download_only_test.go:183: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-713440: exit status 85 (73.098783ms)

-- stdout --
	
	==> Audit <==
	┌─────────┬───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┬──────────────────────┬─────────┬─────────┬─────────────────────┬─────────────────────┐
	│ COMMAND │                                                                                   ARGS                                                                                    │       PROFILE        │  USER   │ VERSION │     START TIME      │      END TIME       │
	├─────────┼───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┼──────────────────────┼─────────┼─────────┼─────────────────────┼─────────────────────┤
	│ start   │ -o=json --download-only -p download-only-103718 --force --alsologtostderr --kubernetes-version=v1.28.0 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-103718 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	│ delete  │ --all                                                                                                                                                                     │ minikube             │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ delete  │ -p download-only-103718                                                                                                                                                   │ download-only-103718 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │ 08 Nov 25 08:28 UTC │
	│ start   │ -o=json --download-only -p download-only-713440 --force --alsologtostderr --kubernetes-version=v1.34.1 --container-runtime=crio --driver=docker  --container-runtime=crio │ download-only-713440 │ jenkins │ v1.37.0 │ 08 Nov 25 08:28 UTC │                     │
	└─────────┴───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────┴──────────────────────┴─────────┴─────────┴─────────────────────┴─────────────────────┘
	
	
	==> Last Start <==
	Log file created at: 2025/11/08 08:28:50
	Running on machine: ubuntu-20-agent-4
	Binary: Built with gc go1.24.6 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1108 08:28:50.398478    9739 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:28:50.398583    9739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:50.398591    9739 out.go:374] Setting ErrFile to fd 2...
	I1108 08:28:50.398596    9739 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:28:50.398795    9739 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:28:50.399238    9739 out.go:368] Setting JSON to true
	I1108 08:28:50.400010    9739 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":681,"bootTime":1762589849,"procs":174,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:28:50.400101    9739 start.go:143] virtualization: kvm guest
	I1108 08:28:50.402120    9739 out.go:99] [download-only-713440] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:28:50.402329    9739 notify.go:221] Checking for updates...
	I1108 08:28:50.403545    9739 out.go:171] MINIKUBE_LOCATION=21866
	I1108 08:28:50.405106    9739 out.go:171] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:28:50.406329    9739 out.go:171] KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:28:50.410913    9739 out.go:171] MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:28:50.412261    9739 out.go:171] MINIKUBE_BIN=out/minikube-linux-amd64
	W1108 08:28:50.414576    9739 out.go:336] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1108 08:28:50.414792    9739 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:28:50.439115    9739 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:28:50.439183    9739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:50.496229    9739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-08 08:28:50.485060977 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:50.496359    9739 docker.go:319] overlay module found
	I1108 08:28:50.498551    9739 out.go:99] Using the docker driver based on user configuration
	I1108 08:28:50.498589    9739 start.go:309] selected driver: docker
	I1108 08:28:50.498595    9739 start.go:930] validating driver "docker" against <nil>
	I1108 08:28:50.498693    9739 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:28:50.557451    9739 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:false NGoroutines:52 SystemTime:2025-11-08 08:28:50.548531891 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:28:50.557620    9739 start_flags.go:327] no existing cluster config was found, will generate one from the flags 
	I1108 08:28:50.558098    9739 start_flags.go:410] Using suggested 8000MB memory alloc based on sys=32093MB, container=32093MB
	I1108 08:28:50.558260    9739 start_flags.go:974] Wait components to verify : map[apiserver:true system_pods:true]
	I1108 08:28:50.560155    9739 out.go:171] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-713440 host does not exist
	  To start a cluster, run: "minikube start -p download-only-713440"

-- /stdout --
aaa_download_only_test.go:184: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.34.1/LogsDuration (0.07s)

TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAll
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.34.1/DeleteAll (0.23s)

TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

=== RUN   TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:207: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-713440
--- PASS: TestDownloadOnly/v1.34.1/DeleteAlwaysSucceeds (0.15s)

TestDownloadOnlyKic (0.42s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:231: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-800960 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-800960" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-800960
--- PASS: TestDownloadOnlyKic (0.42s)

TestBinaryMirror (0.82s)

=== RUN   TestBinaryMirror
I1108 08:28:55.347761    9369 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.34.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:309: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-174375 --alsologtostderr --binary-mirror http://127.0.0.1:33529 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-174375" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-174375
--- PASS: TestBinaryMirror (0.82s)

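The --binary-mirror flag redirects the kubectl/kubelet/kubeadm downloads to an alternate host. A sketch of the same invocation, with the port taken from the run above (in practice it would be wherever your local mirror listens):

  # Download-only start against a local binary mirror instead of dl.k8s.io
  out/minikube-linux-amd64 start --download-only -p binary-mirror-174375 --alsologtostderr \
    --binary-mirror http://127.0.0.1:33529 --driver=docker --container-runtime=crio
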
TestOffline (89.79s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-798164 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-798164 --alsologtostderr -v=1 --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (1m27.097391122s)
helpers_test.go:175: Cleaning up "offline-crio-798164" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-798164
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-798164: (2.689323145s)
--- PASS: TestOffline (89.79s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-758852
addons_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-758852: exit status 85 (62.280031ms)

-- stdout --
	* Profile "addons-758852" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-758852"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.06s)

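Exit status 85 is minikube's "profile not found" outcome. A sketch of the same negative check, run before the profile is created just as the test does:

  # Enabling an addon on a profile that does not exist yet should fail fast
  out/minikube-linux-amd64 addons enable dashboard -p addons-758852
  echo $?   # 85 in this run
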
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-758852
addons_test.go:1011: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-758852: exit status 85 (62.524402ms)

-- stdout --
	* Profile "addons-758852" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-758852"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.06s)

TestAddons/Setup (124.67s)

=== RUN   TestAddons/Setup
addons_test.go:108: (dbg) Run:  out/minikube-linux-amd64 start -p addons-758852 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:108: (dbg) Done: out/minikube-linux-amd64 start -p addons-758852 --wait=true --memory=4096 --alsologtostderr --addons=registry --addons=registry-creds --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m4.666245226s)
--- PASS: TestAddons/Setup (124.67s)

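All of the addons above are switched on at start time via repeated --addons flags. Individual addons can also be toggled on the running profile afterwards; a sketch with metrics-server as the example:

  # Toggle a single addon on the existing profile
  out/minikube-linux-amd64 addons enable metrics-server -p addons-758852
  out/minikube-linux-amd64 addons disable metrics-server -p addons-758852
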
TestAddons/serial/GCPAuth/Namespaces (0.13s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:630: (dbg) Run:  kubectl --context addons-758852 create ns new-namespace
addons_test.go:644: (dbg) Run:  kubectl --context addons-758852 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:675: (dbg) Run:  kubectl --context addons-758852 create -f testdata/busybox.yaml
addons_test.go:682: (dbg) Run:  kubectl --context addons-758852 create sa gcp-auth-test
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [850742cc-4864-4985-838b-99ba86e8a88f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [850742cc-4864-4985-838b-99ba86e8a88f] Running
addons_test.go:688: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 8.003603736s
addons_test.go:694: (dbg) Run:  kubectl --context addons-758852 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:706: (dbg) Run:  kubectl --context addons-758852 describe sa gcp-auth-test
addons_test.go:744: (dbg) Run:  kubectl --context addons-758852 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (8.42s)

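The pass condition boils down to two environment probes inside the test pod, both reproducible by hand:

  # The gcp-auth webhook should have injected both variables into the busybox pod
  kubectl --context addons-758852 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
  kubectl --context addons-758852 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
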
TestAddons/StoppedEnableDisable (16.66s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-758852
addons_test.go:172: (dbg) Done: out/minikube-linux-amd64 stop -p addons-758852: (16.384791544s)
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-758852
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-758852
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-758852
--- PASS: TestAddons/StoppedEnableDisable (16.66s)

TestCertOptions (32.51s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-763535 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-763535 --memory=3072 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (29.221406946s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-763535 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-763535 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-763535 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-763535" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-763535
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-763535: (2.473759719s)
--- PASS: TestCertOptions (32.51s)
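
The custom --apiserver-ips, --apiserver-names, and --apiserver-port values are asserted by reading them back out of the serving certificate and the kubeconfig. A hand-run equivalent, as a sketch (the grep patterns are illustrative):

    minikube -p cert-options-763535 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" | grep -A1 'Subject Alternative Name'
    # expect 192.168.15.15 and www.google.com among the SANs
    kubectl --context cert-options-763535 config view | grep server
    # expect the server URL to use port 8555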

TestCertExpiration (217.82s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-640168 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-640168 --memory=3072 --cert-expiration=3m --driver=docker  --container-runtime=crio: (28.76704806s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-640168 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-640168 --memory=3072 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (6.173950644s)
helpers_test.go:175: Cleaning up "cert-expiration-640168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-640168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-640168: (2.877118229s)
--- PASS: TestCertExpiration (217.82s)
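
Most of the ~218s wall time is the wait itself: the cluster starts with 3-minute certificates, the test lets them lapse, and the restart with a long expiry must regenerate them. A sketch of the flow:

    minikube start -p cert-expiration-640168 --memory=3072 --cert-expiration=3m --driver=docker --container-runtime=crio
    sleep 180    # let the 3-minute certificates expire
    minikube start -p cert-expiration-640168 --memory=3072 --cert-expiration=8760h --driver=docker --container-runtime=crio    # regenerates certs valid for a year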

TestForceSystemdFlag (39.43s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-867114 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-867114 --memory=3072 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (36.417244735s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-867114 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-867114" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-867114
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-867114: (2.678636101s)
--- PASS: TestForceSystemdFlag (39.43s)
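
The assertion behind docker_test.go:132 is that --force-systemd switches CRI-O to the systemd cgroup manager. A hand-run check, as a sketch (the expected value assumes CRI-O's standard config key):

    minikube -p force-systemd-flag-867114 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # expect: cgroup_manager = "systemd"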

TestForceSystemdEnv (23.28s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-004778 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-004778 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (20.785723532s)
helpers_test.go:175: Cleaning up "force-systemd-env-004778" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-004778
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-004778: (2.490658979s)
--- PASS: TestForceSystemdEnv (23.28s)
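
Same behavior, driven by the environment variable instead of the flag. A sketch:

    MINIKUBE_FORCE_SYSTEMD=true minikube start -p force-systemd-env-004778 --memory=3072 --driver=docker --container-runtime=crio
    minikube -p force-systemd-env-004778 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager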

TestErrorSpam/setup (21.29s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-700506 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-700506 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-700506 -n=1 --memory=3072 --wait=false --log_dir=/tmp/nospam-700506 --driver=docker  --container-runtime=crio: (21.289155145s)
--- PASS: TestErrorSpam/setup (21.29s)

TestErrorSpam/start (0.66s)

=== RUN   TestErrorSpam/start
error_spam_test.go:206: Cleaning up 1 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 start --dry-run
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 start --dry-run
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 start --dry-run
--- PASS: TestErrorSpam/start (0.66s)

TestErrorSpam/status (0.92s)

=== RUN   TestErrorSpam/status
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 status
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 status
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 status
--- PASS: TestErrorSpam/status (0.92s)

TestErrorSpam/pause (6.8s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause: exit status 80 (2.288647888s)

-- stdout --
	* Pausing node nospam-700506 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:34:27Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause: exit status 80 (2.317800237s)

-- stdout --
	* Pausing node nospam-700506 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:34:29Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause: exit status 80 (2.19139148s)

-- stdout --
	* Pausing node nospam-700506 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_PAUSE: Pause: list running: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:34:32Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 pause" failed: exit status 80
--- PASS: TestErrorSpam/pause (6.80s)
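
All three pause attempts reduce to the same underlying error: `sudo runc list -f json` inside the node fails because runc's state directory /run/runc is missing, so minikube cannot enumerate containers to pause. The failure can be reproduced directly, as a sketch:

    minikube -p nospam-700506 ssh "sudo runc list -f json"
    # time="..." level=error msg="open /run/runc: no such file or directory"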

TestErrorSpam/unpause (5.87s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause: exit status 80 (1.600528711s)

-- stdout --
	* Unpausing node nospam-700506 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:34:33Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause" failed: exit status 80
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause
error_spam_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause: exit status 80 (2.010252482s)

-- stdout --
	* Unpausing node nospam-700506 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:34:35Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:151: "out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause" failed: exit status 80
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause
error_spam_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause: exit status 80 (2.263446087s)

-- stdout --
	* Unpausing node nospam-700506 ... 
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_UNPAUSE: Pause: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2025-11-08T08:34:37Z" level=error msg="open /run/runc: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_delete_7f6b85125f52d8b6f2676a081a2b9f4eb5a7d9b1_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
error_spam_test.go:174: "out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 unpause" failed: exit status 80
--- PASS: TestErrorSpam/unpause (5.87s)

TestErrorSpam/stop (18.09s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:206: Cleaning up 0 logfile(s) ...
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 stop
error_spam_test.go:149: (dbg) Done: out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 stop: (17.879347288s)
error_spam_test.go:149: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 stop
error_spam_test.go:172: (dbg) Run:  out/minikube-linux-amd64 -p nospam-700506 --log_dir /tmp/nospam-700506 stop
--- PASS: TestErrorSpam/stop (18.09s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1860: local sync path: /home/jenkins/minikube-integration/21866-5860/.minikube/files/etc/test/nested/copy/9369/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.43s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2239: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096647 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2239: (dbg) Done: out/minikube-linux-amd64 start -p functional-096647 --memory=4096 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.428286338s)
--- PASS: TestFunctional/serial/StartWithProxy (37.43s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (5.99s)

=== RUN   TestFunctional/serial/SoftStart
I1108 08:35:38.531813    9369 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
functional_test.go:674: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096647 --alsologtostderr -v=8
functional_test.go:674: (dbg) Done: out/minikube-linux-amd64 start -p functional-096647 --alsologtostderr -v=8: (5.986318319s)
functional_test.go:678: soft start took 5.987622198s for "functional-096647" cluster.
I1108 08:35:44.518642    9369 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/SoftStart (5.99s)

TestFunctional/serial/KubeContext (0.05s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:696: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:711: (dbg) Run:  kubectl --context functional-096647 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (2.6s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cache add registry.k8s.io/pause:3.1
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cache add registry.k8s.io/pause:3.3
functional_test.go:1064: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.60s)

TestFunctional/serial/CacheCmd/cache/add_local (0.75s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1092: (dbg) Run:  docker build -t minikube-local-cache-test:functional-096647 /tmp/TestFunctionalserialCacheCmdcacheadd_local4016118558/001
functional_test.go:1104: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cache add minikube-local-cache-test:functional-096647
functional_test.go:1109: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cache delete minikube-local-cache-test:functional-096647
functional_test.go:1098: (dbg) Run:  docker rmi minikube-local-cache-test:functional-096647
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (0.75s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1117: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1125: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1139: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1162: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1168: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (272.345863ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1173: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cache reload
functional_test.go:1178: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.49s)
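
The round-trip under test: remove an image out from under the runtime, confirm it is gone, then restore it from minikube's on-host cache. By hand, as a sketch (plain `minikube` stands in for the test binary):

    minikube -p functional-096647 ssh sudo crictl rmi registry.k8s.io/pause:latest
    minikube -p functional-096647 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # exit 1: image gone
    minikube -p functional-096647 cache reload
    minikube -p functional-096647 ssh sudo crictl inspecti registry.k8s.io/pause:latest    # succeeds again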

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1187: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:731: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 kubectl -- --context functional-096647 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:756: (dbg) Run:  out/kubectl --context functional-096647 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

TestFunctional/serial/ExtraConfig (45.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:772: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096647 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E1108 08:36:01.472058    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:01.478461    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:01.489804    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:01.511165    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:01.552624    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:01.634084    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:01.795613    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:02.117310    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:02.759377    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:04.040976    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:06.603885    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:11.725411    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:36:21.967521    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:772: (dbg) Done: out/minikube-linux-amd64 start -p functional-096647 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (45.448563256s)
functional_test.go:776: restart took 45.448682534s for "functional-096647" cluster.
I1108 08:36:35.686475    9369 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestFunctional/serial/ExtraConfig (45.45s)
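
One way to confirm that the --extra-config value actually reached the apiserver is to read the flag back off the static-pod spec. This is a hypothetical follow-up check, not part of the test:

    kubectl --context functional-096647 -n kube-system get pod -l component=kube-apiserver -o yaml \
      | grep enable-admission-plugins
    # NamespaceAutoProvision should appear in the flag's value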

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:825: (dbg) Run:  kubectl --context functional-096647 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:840: etcd phase: Running
functional_test.go:850: etcd status: Ready
functional_test.go:840: kube-apiserver phase: Running
functional_test.go:850: kube-apiserver status: Ready
functional_test.go:840: kube-controller-manager phase: Running
functional_test.go:850: kube-controller-manager status: Ready
functional_test.go:840: kube-scheduler phase: Running
functional_test.go:850: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.17s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1251: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 logs
functional_test.go:1251: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 logs: (1.173756556s)
--- PASS: TestFunctional/serial/LogsCmd (1.17s)

TestFunctional/serial/LogsFileCmd (1.21s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1265: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 logs --file /tmp/TestFunctionalserialLogsFileCmd3880989426/001/logs.txt
functional_test.go:1265: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 logs --file /tmp/TestFunctionalserialLogsFileCmd3880989426/001/logs.txt: (1.207252003s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.21s)

TestFunctional/serial/InvalidService (3.79s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2326: (dbg) Run:  kubectl --context functional-096647 apply -f testdata/invalidsvc.yaml
functional_test.go:2340: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-096647
functional_test.go:2340: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-096647: exit status 115 (336.121412ms)

-- stdout --
	┌───────────┬─────────────┬─────────────┬───────────────────────────┐
	│ NAMESPACE │    NAME     │ TARGET PORT │            URL            │
	├───────────┼─────────────┼─────────────┼───────────────────────────┤
	│ default   │ invalid-svc │ 80          │ http://192.168.49.2:32548 │
	└───────────┴─────────────┴─────────────┴───────────────────────────┘
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2332: (dbg) Run:  kubectl --context functional-096647 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (3.79s)
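
The SVC_UNREACHABLE exit (status 115) is minikube noticing that the NodePort service has no running pods behind it. The same condition is visible straight from the API, as a sketch:

    kubectl --context functional-096647 get endpoints invalid-svc    # ENDPOINTS column stays empty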

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 config get cpus: exit status 14 (80.76737ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 config set cpus 2
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 config get cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 config unset cpus
functional_test.go:1214: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 config get cpus
functional_test.go:1214: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 config get cpus: exit status 14 (80.281383ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
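
The contract exercised here: `config get` on an unset key exits 14 and reports the key as missing, while set/get/unset otherwise round-trip cleanly. By hand, as a sketch:

    minikube -p functional-096647 config set cpus 2
    minikube -p functional-096647 config get cpus      # prints 2
    minikube -p functional-096647 config unset cpus
    minikube -p functional-096647 config get cpus      # Error: specified key could not be found in config
    echo $?                                            # 14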

TestFunctional/parallel/DashboardCmd (5.2s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:920: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-096647 --alsologtostderr -v=1]
functional_test.go:925: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-096647 --alsologtostderr -v=1] ...
helpers_test.go:525: unable to kill pid 47888: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.20s)

TestFunctional/parallel/DryRun (0.38s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:989: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096647 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:989: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-096647 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (160.142237ms)

-- stdout --
	* [functional-096647] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I1108 08:36:53.621971   45113 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:36:53.622218   45113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:53.622236   45113 out.go:374] Setting ErrFile to fd 2...
	I1108 08:36:53.622244   45113 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:53.622474   45113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:36:53.622921   45113 out.go:368] Setting JSON to false
	I1108 08:36:53.623854   45113 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1165,"bootTime":1762589849,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:36:53.623909   45113 start.go:143] virtualization: kvm guest
	I1108 08:36:53.625750   45113 out.go:179] * [functional-096647] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 08:36:53.627106   45113 notify.go:221] Checking for updates...
	I1108 08:36:53.627128   45113 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:36:53.628523   45113 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:36:53.629753   45113 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:36:53.630860   45113 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:36:53.632680   45113 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:36:53.633880   45113 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:36:53.635269   45113 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:36:53.635754   45113 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:36:53.659086   45113 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:36:53.659204   45113 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:36:53.718252   45113 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-08 08:36:53.707276757 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:36:53.718423   45113 docker.go:319] overlay module found
	I1108 08:36:53.720473   45113 out.go:179] * Using the docker driver based on existing profile
	I1108 08:36:53.721776   45113 start.go:309] selected driver: docker
	I1108 08:36:53.721796   45113 start.go:930] validating driver "docker" against &{Name:functional-096647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-096647 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:36:53.721887   45113 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:36:53.723837   45113 out.go:203] 
	W1108 08:36:53.725257   45113 out.go:285] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1108 08:36:53.726353   45113 out.go:203] 

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096647 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.38s)
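
The failing half of the dry run never touches the node: requesting 250MB trips minikube's 1800MB usable-memory floor during flag validation and exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the same dry run without the memory override succeeds. A sketch:

    minikube start -p functional-096647 --dry-run --memory 250MB --driver=docker --container-runtime=crio; echo $?    # 23
    minikube start -p functional-096647 --dry-run --driver=docker --container-runtime=crio; echo $?                   # 0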

TestFunctional/parallel/InternationalLanguage (0.2s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1035: (dbg) Run:  out/minikube-linux-amd64 start -p functional-096647 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1035: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-096647 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (201.767786ms)

-- stdout --
	* [functional-096647] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I1108 08:36:52.165947   44492 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:36:52.166084   44492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:52.166101   44492 out.go:374] Setting ErrFile to fd 2...
	I1108 08:36:52.166107   44492 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:36:52.166546   44492 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:36:52.167089   44492 out.go:368] Setting JSON to false
	I1108 08:36:52.168214   44492 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":1163,"bootTime":1762589849,"procs":224,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 08:36:52.168327   44492 start.go:143] virtualization: kvm guest
	I1108 08:36:52.171498   44492 out.go:179] * [functional-096647] minikube v1.37.0 sur Ubuntu 22.04 (kvm/amd64)
	I1108 08:36:52.173198   44492 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 08:36:52.173202   44492 notify.go:221] Checking for updates...
	I1108 08:36:52.174513   44492 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 08:36:52.175827   44492 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 08:36:52.176929   44492 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 08:36:52.178336   44492 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 08:36:52.179596   44492 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 08:36:52.181451   44492 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:36:52.182135   44492 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 08:36:52.210315   44492 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 08:36:52.210443   44492 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:36:52.281334   44492 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:37 OomKillDisable:false NGoroutines:56 SystemTime:2025-11-08 08:36:52.268768847 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[m
ap[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:36:52.281452   44492 docker.go:319] overlay module found
	I1108 08:36:52.283868   44492 out.go:179] * Utilisation du pilote docker basé sur le profil existant
	I1108 08:36:52.285078   44492 start.go:309] selected driver: docker
	I1108 08:36:52.285097   44492 start.go:930] validating driver "docker" against &{Name:functional-096647 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.48-1761985721-21837@sha256:a50b37e97dfdea51156e079ca6b45818a801b3d41bbe13d141f35d2e1af6c7d1 Memory:4096 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.34.1 ClusterName:functional-096647 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.34.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s MountString: Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Moun
tPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false DisableCoreDNSLog:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1108 08:36:52.285217   44492 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 08:36:52.287233   44492 out.go:203] 
	W1108 08:36:52.290730   44492 out.go:285] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1108 08:36:52.292052   44492 out.go:203] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.20s)

TestFunctional/parallel/StatusCmd (0.92s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:869: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 status
functional_test.go:875: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:887: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.92s)
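
Note that -f/--format takes a Go template rendered against minikube's status struct, so fields such as {{.Host}}, {{.Kubelet}}, {{.APIServer}} and {{.Kubeconfig}} can be rearranged freely; the "kublet:" key in the command above is literal label text inside the format string, not a field reference. A minimal sketch against the same profile, with the output one would expect from a healthy cluster:

	$ out/minikube-linux-amd64 -p functional-096647 status -f 'host:{{.Host}},apiserver:{{.APIServer}}'
	host:Running,apiserver:Running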

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1695: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 addons list
functional_test.go:1707: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (22.36s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:352: "storage-provisioner" [4b7dc412-7ee6-4f7f-b297-2c4b5916f761] Running
functional_test_pvc_test.go:50: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003670884s
functional_test_pvc_test.go:55: (dbg) Run:  kubectl --context functional-096647 get storageclass -o=json
functional_test_pvc_test.go:75: (dbg) Run:  kubectl --context functional-096647 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:82: (dbg) Run:  kubectl --context functional-096647 get pvc myclaim -o=json
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-096647 apply -f testdata/storage-provisioner/pod.yaml
I1108 08:36:50.209335    9369 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [8e93da64-dd13-43d3-980c-22d7f4320ea6] Pending
helpers_test.go:352: "sp-pod" [8e93da64-dd13-43d3-980c-22d7f4320ea6] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:352: "sp-pod" [8e93da64-dd13-43d3-980c-22d7f4320ea6] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003731841s
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-096647 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:112: (dbg) Run:  kubectl --context functional-096647 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:131: (dbg) Run:  kubectl --context functional-096647 apply -f testdata/storage-provisioner/pod.yaml
I1108 08:37:00.132220    9369 detect.go:223] nested VM detected
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 6m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:352: "sp-pod" [490556a6-66ec-4f63-9b6e-3523b93ed893] Pending
helpers_test.go:352: "sp-pod" [490556a6-66ec-4f63-9b6e-3523b93ed893] Running
functional_test_pvc_test.go:140: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004133875s
functional_test_pvc_test.go:120: (dbg) Run:  kubectl --context functional-096647 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (22.36s)
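
The sequence above is a persistence check: write a file onto the PVC-backed mount, delete the pod, recreate it from the same manifest, and confirm the file survived because it lives on the claim rather than in the container filesystem. Condensed to the essential commands from the log:

	$ kubectl --context functional-096647 exec sp-pod -- touch /tmp/mount/foo
	$ kubectl --context functional-096647 delete -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-096647 apply -f testdata/storage-provisioner/pod.yaml
	$ kubectl --context functional-096647 exec sp-pod -- ls /tmp/mount
	foo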

TestFunctional/parallel/SSHCmd (0.63s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1730: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "echo hello"
functional_test.go:1747: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.63s)

TestFunctional/parallel/CpCmd (1.93s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh -n functional-096647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cp functional-096647:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd50659121/001/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh -n functional-096647 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh -n functional-096647 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.93s)
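
minikube cp copies in either direction: a bare path refers to the local machine, a <node>:<path> form to the node's filesystem, and the last pair of commands shows that missing destination directories are created on demand. A sketch of the round trip, where /tmp/cp-test-copy.txt is an illustrative local destination:

	$ out/minikube-linux-amd64 -p functional-096647 cp testdata/cp-test.txt /home/docker/cp-test.txt
	$ out/minikube-linux-amd64 -p functional-096647 cp functional-096647:/home/docker/cp-test.txt /tmp/cp-test-copy.txt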

TestFunctional/parallel/MySQL (17.85s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1798: (dbg) Run:  kubectl --context functional-096647 replace --force -f testdata/mysql.yaml
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:352: "mysql-5bb876957f-bl48c" [62be41a4-cd0e-42d4-95cf-081f3d391b6e] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
2025/11/08 08:37:09 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
helpers_test.go:352: "mysql-5bb876957f-bl48c" [62be41a4-cd0e-42d4-95cf-081f3d391b6e] Running
functional_test.go:1804: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.002886656s
functional_test.go:1812: (dbg) Run:  kubectl --context functional-096647 exec mysql-5bb876957f-bl48c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-096647 exec mysql-5bb876957f-bl48c -- mysql -ppassword -e "show databases;": exit status 1 (84.306794ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1108 08:37:21.535293    9369 retry.go:31] will retry after 1.00560838s: exit status 1
functional_test.go:1812: (dbg) Run:  kubectl --context functional-096647 exec mysql-5bb876957f-bl48c -- mysql -ppassword -e "show databases;"
functional_test.go:1812: (dbg) Non-zero exit: kubectl --context functional-096647 exec mysql-5bb876957f-bl48c -- mysql -ppassword -e "show databases;": exit status 1 (85.201955ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
I1108 08:37:22.626557    9369 retry.go:31] will retry after 1.436007956s: exit status 1
E1108 08:37:23.411162    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
functional_test.go:1812: (dbg) Run:  kubectl --context functional-096647 exec mysql-5bb876957f-bl48c -- mysql -ppassword -e "show databases;"
E1108 08:38:45.332514    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:41:01.464647    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:41:29.174509    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:46:01.464459    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/MySQL (17.85s)
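
ERROR 2002 means mysqld inside the pod is not yet accepting socket connections even though the pod reports Running, so the test retries with increasing backoff until the query succeeds. The same probe works by hand; a simple polling loop, assuming the pod name from the log:

	$ until kubectl --context functional-096647 exec mysql-5bb876957f-bl48c -- \
	    mysql -ppassword -e "show databases;"; do sleep 2; done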

TestFunctional/parallel/FileSync (0.36s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1934: Checking for existence of /etc/test/nested/copy/9369/hosts within VM
functional_test.go:1936: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /etc/test/nested/copy/9369/hosts"
functional_test.go:1941: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.36s)
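
File sync pushes anything placed under $MINIKUBE_HOME/files into the node at the same relative path, which is why the test material appears at /etc/test/nested/copy/9369/hosts. The check is easy to reproduce directly:

	$ out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /etc/test/nested/copy/9369/hosts"
	Test file for checking file sync process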

TestFunctional/parallel/CertSync (1.89s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1977: Checking for existence of /etc/ssl/certs/9369.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /etc/ssl/certs/9369.pem"
functional_test.go:1977: Checking for existence of /usr/share/ca-certificates/9369.pem within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /usr/share/ca-certificates/9369.pem"
functional_test.go:1977: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1978: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/93692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /etc/ssl/certs/93692.pem"
functional_test.go:2004: Checking for existence of /usr/share/ca-certificates/93692.pem within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /usr/share/ca-certificates/93692.pem"
functional_test.go:2004: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2005: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.89s)
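
The .0 names are OpenSSL subject-hash links: /etc/ssl/certs/51391683.0 is expected to contain the same certificate as 9369.pem, and /etc/ssl/certs/3ec20f2e.0 the same as 93692.pem. Assuming openssl is available in the node image, the hash pairing can be verified like so:

	$ out/minikube-linux-amd64 -p functional-096647 ssh "sudo openssl x509 -noout -hash -in /usr/share/ca-certificates/9369.pem"
	51391683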

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:234: (dbg) Run:  kubectl --context functional-096647 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
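
The go-template walks the label map of the first node in the list. The same mechanism with a slightly friendlier, hypothetical formatting that prints one key=value pair per line:

	$ kubectl --context functional-096647 get nodes -o go-template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}}={{$v}}{{"\n"}}{{end}}'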

TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo systemctl is-active docker"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh "sudo systemctl is-active docker": exit status 1 (311.1948ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
functional_test.go:2032: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo systemctl is-active containerd"
functional_test.go:2032: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh "sudo systemctl is-active containerd": exit status 1 (325.167291ms)
-- stdout --
	inactive
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.64s)
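
systemctl is-active prints the unit state and exits 0 only for an active unit; an inactive unit yields exit code 3, which minikube ssh surfaces as "Process exited with status 3". With crio as the configured runtime, docker and containerd being inactive is exactly the expected result. The positive counterpart, with the output one would expect:

	$ out/minikube-linux-amd64 -p functional-096647 ssh "sudo systemctl is-active crio"
	active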

TestFunctional/parallel/License (0.29s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2293: (dbg) Run:  out/minikube-linux-amd64 license
E1108 08:36:42.449569    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
--- PASS: TestFunctional/parallel/License (0.29s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls --format table --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096647 image ls --format table --alsologtostderr:
┌─────────────────────────────────────────┬────────────────────┬───────────────┬────────┐
│                  IMAGE                  │        TAG         │   IMAGE ID    │  SIZE  │
├─────────────────────────────────────────┼────────────────────┼───────────────┼────────┤
│ docker.io/library/mysql                 │ 5.7                │ 5107333e08a87 │ 520MB  │
│ gcr.io/k8s-minikube/storage-provisioner │ v5                 │ 6e38f40d628db │ 31.5MB │
│ localhost/my-image                      │ functional-096647  │ d351445fb3c40 │ 1.47MB │
│ registry.k8s.io/kube-proxy              │ v1.34.1            │ fc25172553d79 │ 73.1MB │
│ registry.k8s.io/coredns/coredns         │ v1.12.1            │ 52546a367cc9e │ 76.1MB │
│ registry.k8s.io/etcd                    │ 3.6.4-0            │ 5f1f5298c888d │ 196MB  │
│ registry.k8s.io/kube-apiserver          │ v1.34.1            │ c3994bc696102 │ 89MB   │
│ registry.k8s.io/pause                   │ latest             │ 350b164e7ae1d │ 247kB  │
│ docker.io/kindest/kindnetd              │ v20250512-df8de77b │ 409467f978b4a │ 109MB  │
│ gcr.io/k8s-minikube/busybox             │ 1.28.4-glibc       │ 56cc512116c8f │ 4.63MB │
│ gcr.io/k8s-minikube/busybox             │ latest             │ beae173ccac6a │ 1.46MB │
│ registry.k8s.io/kube-controller-manager │ v1.34.1            │ c80c8dbafe7dd │ 76MB   │
│ registry.k8s.io/kube-scheduler          │ v1.34.1            │ 7dd6aaa1717ab │ 53.8MB │
│ registry.k8s.io/pause                   │ 3.1                │ da86e6ba6ca19 │ 747kB  │
│ registry.k8s.io/pause                   │ 3.3                │ 0184c1613d929 │ 686kB  │
│ docker.io/library/nginx                 │ alpine             │ d4918ca78576a │ 54.3MB │
│ docker.io/library/nginx                 │ latest             │ d261fd19cb632 │ 155MB  │
│ registry.k8s.io/pause                   │ 3.10.1             │ cd073f4c5f6a8 │ 742kB  │
└─────────────────────────────────────────┴────────────────────┴───────────────┴────────┘
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096647 image ls --format table --alsologtostderr:
I1108 08:37:17.171204   49691 out.go:360] Setting OutFile to fd 1 ...
I1108 08:37:17.171459   49691 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:17.171469   49691 out.go:374] Setting ErrFile to fd 2...
I1108 08:37:17.171473   49691 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:17.171668   49691 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
I1108 08:37:17.172255   49691 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:17.172382   49691 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:17.172771   49691 cli_runner.go:164] Run: docker container inspect functional-096647 --format={{.State.Status}}
I1108 08:37:17.190911   49691 ssh_runner.go:195] Run: systemctl --version
I1108 08:37:17.190984   49691 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096647
I1108 08:37:17.209669   49691 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/functional-096647/id_rsa Username:docker}
I1108 08:37:17.301838   49691 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.22s)
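
As the stderr trace shows, image ls on a crio cluster is a thin wrapper: minikube opens an SSH session into the node and formats the output of crictl, so the raw data behind the table can be fetched directly:

	$ out/minikube-linux-amd64 -p functional-096647 ssh "sudo crictl images --output json"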

TestFunctional/parallel/ImageCommands/ImageListJson (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls --format json --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096647 image ls --format json --alsologtostderr:
[{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"0184c1613d92931126feb4c548e5da1101
5513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d","repoDigests":["docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad","docker.io/library/nginx@sha256:bd1578eec7
75d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b"],"repoTags":["docker.io/library/nginx:latest"],"size":"155489797"},{"id":"7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813","repoDigests":["registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31","registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500"],"repoTags":["registry.k8s.io/kube-scheduler:v1.34.1"],"size":"53844823"},{"id":"409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c","repoDigests":["docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a","docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11"],"repoTags":["docker.io/kindest/kindnetd:v20250512-df8de77b"],"size":"109379124"},{"id":"c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97","repoDigests":["registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b7574
5df78fe8903bade1fa0f874f9167dad964","registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902"],"repoTags":["registry.k8s.io/kube-apiserver:v1.34.1"],"size":"89046001"},{"id":"c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89","registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.34.1"],"size":"76004181"},{"id":"cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f","repoDigests":["registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c","registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41"],"repoTags":["registry.k8s.io/pause:3.10.1"],"size":"742092"},{"id":"d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cab
ed9","repoDigests":["docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7","docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14"],"repoTags":["docker.io/library/nginx:alpine"],"size":"54252718"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969","repoDigests":["registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998","registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c"],"repoTags":["registry.k8s.io/coredns/coredns:v1.12.1"],"size":"76103547"},
{"id":"5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115","repoDigests":["registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f","registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19"],"repoTags":["registry.k8s.io/etcd:3.6.4-0"],"size":"195976448"},{"id":"fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7","repoDigests":["registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a","registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a"],"repoTags":["registry.k8s.io/kube-proxy:v1.34.1"],"size":"73138073"}]
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096647 image ls --format json --alsologtostderr:
I1108 08:37:13.577200   48890 out.go:360] Setting OutFile to fd 1 ...
I1108 08:37:13.577501   48890 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:13.577514   48890 out.go:374] Setting ErrFile to fd 2...
I1108 08:37:13.577521   48890 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:13.577798   48890 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
I1108 08:37:13.578517   48890 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:13.578655   48890 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:13.579223   48890 cli_runner.go:164] Run: docker container inspect functional-096647 --format={{.State.Status}}
I1108 08:37:13.601853   48890 ssh_runner.go:195] Run: systemctl --version
I1108 08:37:13.601904   48890 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096647
I1108 08:37:13.623698   48890 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/functional-096647/id_rsa Username:docker}
I1108 08:37:13.726789   48890 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.54s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:276: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls --format yaml --alsologtostderr
functional_test.go:281: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096647 image ls --format yaml --alsologtostderr:
- id: 409467f978b4a30fe717012736557d637f66371452c3b279c02b943b367a141c
repoDigests:
- docker.io/kindest/kindnetd@sha256:07a4b3fe0077a0ae606cc0a200fc25a28fa64dcc30b8d311b461089969449f9a
- docker.io/kindest/kindnetd@sha256:7a9c9fa59dd517cdc2c82eef1e51392524dd285e9cf7cb5a851c49f294d6cd11
repoTags:
- docker.io/kindest/kindnetd:v20250512-df8de77b
size: "109379124"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 2c8e690a06c17ba9eee5cde4f9f607e9b72561937039f84cdb9644ae10bd5fce
repoDigests:
- docker.io/library/49a4302e31697c46a4d774a14400980f83387fc4b4ff50b7bf14c2610fbb0645-tmp@sha256:0ad916dad9a90afe5db5cfb6dd340875464a4548d334c72b54da6975c84105b5
repoTags: []
size: "1466132"
- id: c3994bc6961024917ec0aeee02e62828108c21a52d87648e30f3080d9cbadc97
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:264da1e0ab552e24b2eb034a1b75745df78fe8903bade1fa0f874f9167dad964
- registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902
repoTags:
- registry.k8s.io/kube-apiserver:v1.34.1
size: "89046001"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: d4918ca78576a537caa7b0c043051c8efc1796de33fee8724ee0fff4a1cabed9
repoDigests:
- docker.io/library/nginx@sha256:667473807103639a0aca5b49534a216d2b64f0fb868aaa801f023da0cdd781c7
- docker.io/library/nginx@sha256:b3c656d55d7ad751196f21b7fd2e8d4da9cb430e32f646adcf92441b72f82b14
repoTags:
- docker.io/library/nginx:alpine
size: "54252718"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: d351445fb3c40329fba63f6237742f99ad93706fa5af89f8712bb892991b84a8
repoDigests:
- localhost/my-image@sha256:d37e6c52fa8173cee732bc176e1480ab1dc2d77326ceb027f377097aa9cfe112
repoTags:
- localhost/my-image:functional-096647
size: "1468744"
- id: 52546a367cc9e0d924aa3b190596a9167fa6e53245023b5b5baf0f07e5443969
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:4f7a57135719628cf2070c5e3cbde64b013e90d4c560c5ecbf14004181f91998
- registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c
repoTags:
- registry.k8s.io/coredns/coredns:v1.12.1
size: "76103547"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: d261fd19cb63238535ab80d4e1be1d9e7f6c8b5a28a820188968dd3e6f06072d
repoDigests:
- docker.io/library/nginx@sha256:1beed3ca46acebe9d3fb62e9067f03d05d5bfa97a00f30938a0a3580563272ad
- docker.io/library/nginx@sha256:bd1578eec775d0b28fd7f664b182b7e1fb75f1dd09f92d865dababe8525dfe8b
repoTags:
- docker.io/library/nginx:latest
size: "155489797"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee
- gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
repoTags:
- gcr.io/k8s-minikube/busybox:latest
size: "1462480"
- id: fc25172553d79197ecd840ec8dba1fba68330079355e974b04c1a441e6a4a0b7
repoDigests:
- registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a
- registry.k8s.io/kube-proxy@sha256:9e876d245c76f0e3529c82bb103b60a59c4e190317827f977ab696cc4f43020a
repoTags:
- registry.k8s.io/kube-proxy:v1.34.1
size: "73138073"
- id: cd073f4c5f6a8e9dc6f3125ba00cf60819cae95c1ec84a1f146ee4a9cf9e803f
repoDigests:
- registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c
- registry.k8s.io/pause@sha256:e5b941ef8f71de54dc3a13398226c269ba217d06650a21bd3afcf9d890cf1f41
repoTags:
- registry.k8s.io/pause:3.10.1
size: "742092"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 5f1f5298c888daa46c4409ff4cefe5ca9d16e479419f94cdb5f5d5563dac0115
repoDigests:
- registry.k8s.io/etcd@sha256:71170330936954286be203a7737459f2838dd71cc79f8ffaac91548a9e079b8f
- registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19
repoTags:
- registry.k8s.io/etcd:3.6.4-0
size: "195976448"
- id: c80c8dbafe7dd71fc21527912a6dd20ccd1b71f3e561a5c28337388d0619538f
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89
- registry.k8s.io/kube-controller-manager@sha256:a6fe41965f1693c8a73ebe75e215d0b7c0902732c66c6692b0dbcfb0f077c992
repoTags:
- registry.k8s.io/kube-controller-manager:v1.34.1
size: "76004181"
- id: 7dd6aaa1717ab7eaae4578503e4c4d9965fcf5a249e8155fe16379ee9b6cb813
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:47306e2178d9766fe3fe9eada02fa995f9f29dcbf518832293dfbe16964e2d31
- registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500
repoTags:
- registry.k8s.io/kube-scheduler:v1.34.1
size: "53844823"
functional_test.go:284: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096647 image ls --format yaml --alsologtostderr:
I1108 08:37:16.952179   49636 out.go:360] Setting OutFile to fd 1 ...
I1108 08:37:16.952296   49636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:16.952310   49636 out.go:374] Setting ErrFile to fd 2...
I1108 08:37:16.952314   49636 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:16.952510   49636 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
I1108 08:37:16.953023   49636 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:16.953130   49636 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:16.953509   49636 cli_runner.go:164] Run: docker container inspect functional-096647 --format={{.State.Status}}
I1108 08:37:16.972009   49636 ssh_runner.go:195] Run: systemctl --version
I1108 08:37:16.972061   49636 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096647
I1108 08:37:16.989466   49636 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/functional-096647/id_rsa Username:docker}
I1108 08:37:17.082241   49636 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.22s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh pgrep buildkitd
functional_test.go:323: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh pgrep buildkitd: exit status 1 (273.539951ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:330: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image build -t localhost/my-image:functional-096647 testdata/build --alsologtostderr
functional_test.go:330: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 image build -t localhost/my-image:functional-096647 testdata/build --alsologtostderr: (2.360898528s)
functional_test.go:335: (dbg) Stdout: out/minikube-linux-amd64 -p functional-096647 image build -t localhost/my-image:functional-096647 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 2c8e690a06c
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-096647
--> d351445fb3c
Successfully tagged localhost/my-image:functional-096647
d351445fb3c40329fba63f6237742f99ad93706fa5af89f8712bb892991b84a8
functional_test.go:338: (dbg) Stderr: out/minikube-linux-amd64 -p functional-096647 image build -t localhost/my-image:functional-096647 testdata/build --alsologtostderr:
I1108 08:37:14.375099   49113 out.go:360] Setting OutFile to fd 1 ...
I1108 08:37:14.375413   49113 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:14.375423   49113 out.go:374] Setting ErrFile to fd 2...
I1108 08:37:14.375427   49113 out.go:408] TERM=,COLORTERM=, which probably does not support color
I1108 08:37:14.375645   49113 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
I1108 08:37:14.376211   49113 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:14.376859   49113 config.go:182] Loaded profile config "functional-096647": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
I1108 08:37:14.377253   49113 cli_runner.go:164] Run: docker container inspect functional-096647 --format={{.State.Status}}
I1108 08:37:14.395402   49113 ssh_runner.go:195] Run: systemctl --version
I1108 08:37:14.395455   49113 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-096647
I1108 08:37:14.413684   49113 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/functional-096647/id_rsa Username:docker}
I1108 08:37:14.505873   49113 build_images.go:162] Building image from path: /tmp/build.1403501429.tar
I1108 08:37:14.505946   49113 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1108 08:37:14.513919   49113 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1403501429.tar
I1108 08:37:14.517617   49113 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1403501429.tar: stat -c "%s %y" /var/lib/minikube/build/build.1403501429.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1403501429.tar': No such file or directory
I1108 08:37:14.517644   49113 ssh_runner.go:362] scp /tmp/build.1403501429.tar --> /var/lib/minikube/build/build.1403501429.tar (3072 bytes)
I1108 08:37:14.534963   49113 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1403501429
I1108 08:37:14.542488   49113 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1403501429 -xf /var/lib/minikube/build/build.1403501429.tar
I1108 08:37:14.550299   49113 crio.go:315] Building image: /var/lib/minikube/build/build.1403501429
I1108 08:37:14.550369   49113 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-096647 /var/lib/minikube/build/build.1403501429 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I1108 08:37:16.656768   49113 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-096647 /var/lib/minikube/build/build.1403501429 --cgroup-manager=cgroupfs: (2.106360633s)
I1108 08:37:16.656846   49113 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1403501429
I1108 08:37:16.664891   49113 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1403501429.tar
I1108 08:37:16.672244   49113 build_images.go:218] Built localhost/my-image:functional-096647 from /tmp/build.1403501429.tar
I1108 08:37:16.672302   49113 build_images.go:134] succeeded building to: functional-096647
I1108 08:37:16.672309   49113 build_images.go:135] failed building to: 
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.85s)
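
Because crio ships no buildkitd (hence the expected pgrep failure at the start), minikube tars the build context, copies it to /var/lib/minikube/build on the node, and runs podman build against it with --cgroup-manager=cgroupfs, cleaning up the staging directory afterwards, exactly as the trace shows. The user-facing equivalent plus a quick presence check:

	$ out/minikube-linux-amd64 -p functional-096647 image build -t localhost/my-image:functional-096647 testdata/build
	$ out/minikube-linux-amd64 -p functional-096647 image ls | grep my-image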

TestFunctional/parallel/ImageCommands/Setup (0.43s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:357: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:362: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-096647
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.43s)

TestFunctional/parallel/Version/short (0.08s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2261: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 version --short
--- PASS: TestFunctional/parallel/Version/short (0.08s)

TestFunctional/parallel/Version/components (0.59s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2275: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.59s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-096647 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-096647 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-096647 tunnel --alsologtostderr] ...
helpers_test.go:525: unable to kill pid 42422: os: process already finished
helpers_test.go:519: unable to terminate pid 41983: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-096647 tunnel --alsologtostderr] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.47s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-096647 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.25s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-096647 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:352: "nginx-svc" [5918647b-5351-4d1d-b1db-a58cd8d140e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:352: "nginx-svc" [5918647b-5351-4d1d-b1db-a58cd8d140e9] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 8.004067103s
I1108 08:36:51.919641    9369 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (8.25s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.29s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2124: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:407: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image rm kicbase/echo-server:functional-096647 --alsologtostderr
functional_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-096647 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.07s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.201.218 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-096647 tunnel --alsologtostderr] ...
functional_test_tunnel_test.go:437: failed to stop process: signal: terminated
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1285: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1290: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.48s)

TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1325: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1330: Took "347.509379ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1339: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1344: Took "59.00207ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1376: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1381: Took "330.0647ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1389: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1394: Took "59.115162ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.39s)

TestFunctional/parallel/MountCmd/any-port (6.73s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdany-port1087077118/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1762591013948369037" to /tmp/TestFunctionalparallelMountCmdany-port1087077118/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1762591013948369037" to /tmp/TestFunctionalparallelMountCmdany-port1087077118/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1762591013948369037" to /tmp/TestFunctionalparallelMountCmdany-port1087077118/001/test-1762591013948369037
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (285.068563ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1108 08:36:54.233746    9369 retry.go:31] will retry after 463.224981ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Nov  8 08:36 created-by-test
-rw-r--r-- 1 docker docker 24 Nov  8 08:36 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Nov  8 08:36 test-1762591013948369037
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh cat /mount-9p/test-1762591013948369037
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-096647 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:352: "busybox-mount" [cc9e6f10-c18b-4749-b194-768c70532a18] Pending
helpers_test.go:352: "busybox-mount" [cc9e6f10-c18b-4749-b194-768c70532a18] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:352: "busybox-mount" [cc9e6f10-c18b-4749-b194-768c70532a18] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:352: "busybox-mount" [cc9e6f10-c18b-4749-b194-768c70532a18] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003527366s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-096647 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdany-port1087077118/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.73s)

TestFunctional/parallel/MountCmd/specific-port (2.02s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdspecific-port1953486497/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.914618ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1108 08:37:00.978054    9369 retry.go:31] will retry after 713.412968ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdspecific-port1953486497/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh "sudo umount -f /mount-9p": exit status 1 (263.837064ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-096647 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdspecific-port1953486497/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.02s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.6s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T" /mount1: exit status 1 (337.966259ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I1108 08:37:03.043509    9369 retry.go:31] will retry after 401.215861ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-096647 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-096647 /tmp/TestFunctionalparallelMountCmdVerifyCleanup747888990/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:507: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.60s)

TestFunctional/parallel/ServiceCmd/List (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1469: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 service list
functional_test.go:1469: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 service list: (1.694673075s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (1.69s)

TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1499: (dbg) Run:  out/minikube-linux-amd64 -p functional-096647 service list -o json
functional_test.go:1499: (dbg) Done: out/minikube-linux-amd64 -p functional-096647 service list -o json: (1.692043037s)
functional_test.go:1504: Took "1.692137515s" to run "out/minikube-linux-amd64 -p functional-096647 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (1.69s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:205: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-096647
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:213: (dbg) Run:  docker rmi -f localhost/my-image:functional-096647
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:221: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-096647
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (155.16s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 start --ha --memory 3072 --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (2m34.451376382s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/StartCluster (155.16s)

TestMultiControlPlane/serial/DeployApp (4.31s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 kubectl -- rollout status deployment/busybox: (2.350696619s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-mxn5w -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-nvqmw -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-vvxw4 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-mxn5w -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-nvqmw -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-vvxw4 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-mxn5w -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-nvqmw -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-vvxw4 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.31s)

TestMultiControlPlane/serial/PingHostFromPods (1.03s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-mxn5w -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-mxn5w -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-nvqmw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-nvqmw -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-vvxw4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 kubectl -- exec busybox-7b57f96db7-vvxw4 -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.03s)

TestMultiControlPlane/serial/AddWorkerNode (54.16s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node add --alsologtostderr -v 5
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 node add --alsologtostderr -v 5: (53.292096102s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (54.16s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-420865 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.87s)

TestMultiControlPlane/serial/CopyFile (16.9s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --output json --alsologtostderr -v 5
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp testdata/cp-test.txt ha-420865:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile569550231/001/cp-test_ha-420865.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865:/home/docker/cp-test.txt ha-420865-m02:/home/docker/cp-test_ha-420865_ha-420865-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test_ha-420865_ha-420865-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865:/home/docker/cp-test.txt ha-420865-m03:/home/docker/cp-test_ha-420865_ha-420865-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test_ha-420865_ha-420865-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865:/home/docker/cp-test.txt ha-420865-m04:/home/docker/cp-test_ha-420865_ha-420865-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test_ha-420865_ha-420865-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp testdata/cp-test.txt ha-420865-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile569550231/001/cp-test_ha-420865-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m02:/home/docker/cp-test.txt ha-420865:/home/docker/cp-test_ha-420865-m02_ha-420865.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test_ha-420865-m02_ha-420865.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m02:/home/docker/cp-test.txt ha-420865-m03:/home/docker/cp-test_ha-420865-m02_ha-420865-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test_ha-420865-m02_ha-420865-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m02:/home/docker/cp-test.txt ha-420865-m04:/home/docker/cp-test_ha-420865-m02_ha-420865-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test_ha-420865-m02_ha-420865-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp testdata/cp-test.txt ha-420865-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile569550231/001/cp-test_ha-420865-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m03:/home/docker/cp-test.txt ha-420865:/home/docker/cp-test_ha-420865-m03_ha-420865.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test_ha-420865-m03_ha-420865.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m03:/home/docker/cp-test.txt ha-420865-m02:/home/docker/cp-test_ha-420865-m03_ha-420865-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test_ha-420865-m03_ha-420865-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m03:/home/docker/cp-test.txt ha-420865-m04:/home/docker/cp-test_ha-420865-m03_ha-420865-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test_ha-420865-m03_ha-420865-m04.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp testdata/cp-test.txt ha-420865-m04:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile569550231/001/cp-test_ha-420865-m04.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m04:/home/docker/cp-test.txt ha-420865:/home/docker/cp-test_ha-420865-m04_ha-420865.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865 "sudo cat /home/docker/cp-test_ha-420865-m04_ha-420865.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m04:/home/docker/cp-test.txt ha-420865-m02:/home/docker/cp-test_ha-420865-m04_ha-420865-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m02 "sudo cat /home/docker/cp-test_ha-420865-m04_ha-420865-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 cp ha-420865-m04:/home/docker/cp-test.txt ha-420865-m03:/home/docker/cp-test_ha-420865-m04_ha-420865-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 ssh -n ha-420865-m03 "sudo cat /home/docker/cp-test_ha-420865-m04_ha-420865-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (16.90s)

TestMultiControlPlane/serial/StopSecondaryNode (19.75s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node stop m02 --alsologtostderr -v 5
E1108 08:51:01.464666    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:365: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 node stop m02 --alsologtostderr -v 5: (19.057195979s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5: exit status 7 (696.290179ms)
-- stdout --
	ha-420865
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-420865-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-420865-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-420865-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I1108 08:51:10.508162   73852 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:51:10.508265   73852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:51:10.508273   73852 out.go:374] Setting ErrFile to fd 2...
	I1108 08:51:10.508278   73852 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:51:10.508508   73852 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:51:10.508660   73852 out.go:368] Setting JSON to false
	I1108 08:51:10.508691   73852 mustload.go:66] Loading cluster: ha-420865
	I1108 08:51:10.508793   73852 notify.go:221] Checking for updates...
	I1108 08:51:10.509049   73852 config.go:182] Loaded profile config "ha-420865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:51:10.509063   73852 status.go:174] checking status of ha-420865 ...
	I1108 08:51:10.509554   73852 cli_runner.go:164] Run: docker container inspect ha-420865 --format={{.State.Status}}
	I1108 08:51:10.530137   73852 status.go:371] ha-420865 host status = "Running" (err=<nil>)
	I1108 08:51:10.530164   73852 host.go:66] Checking if "ha-420865" exists ...
	I1108 08:51:10.530479   73852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-420865
	I1108 08:51:10.549785   73852 host.go:66] Checking if "ha-420865" exists ...
	I1108 08:51:10.550177   73852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:51:10.550229   73852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-420865
	I1108 08:51:10.570340   73852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/ha-420865/id_rsa Username:docker}
	I1108 08:51:10.661865   73852 ssh_runner.go:195] Run: systemctl --version
	I1108 08:51:10.668492   73852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:51:10.681077   73852 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 08:51:10.741531   73852 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:63 OomKillDisable:false NGoroutines:75 SystemTime:2025-11-08 08:51:10.730935013 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 08:51:10.742149   73852 kubeconfig.go:125] found "ha-420865" server: "https://192.168.49.254:8443"
	I1108 08:51:10.742181   73852 api_server.go:166] Checking apiserver status ...
	I1108 08:51:10.742222   73852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:51:10.754073   73852 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup
	W1108 08:51:10.762203   73852 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1230/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 08:51:10.762253   73852 ssh_runner.go:195] Run: ls
	I1108 08:51:10.765874   73852 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 08:51:10.770878   73852 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 08:51:10.770902   73852 status.go:463] ha-420865 apiserver status = Running (err=<nil>)
	I1108 08:51:10.770914   73852 status.go:176] ha-420865 status: &{Name:ha-420865 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:51:10.770932   73852 status.go:174] checking status of ha-420865-m02 ...
	I1108 08:51:10.771184   73852 cli_runner.go:164] Run: docker container inspect ha-420865-m02 --format={{.State.Status}}
	I1108 08:51:10.790733   73852 status.go:371] ha-420865-m02 host status = "Stopped" (err=<nil>)
	I1108 08:51:10.790776   73852 status.go:384] host is not running, skipping remaining checks
	I1108 08:51:10.790784   73852 status.go:176] ha-420865-m02 status: &{Name:ha-420865-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:51:10.790815   73852 status.go:174] checking status of ha-420865-m03 ...
	I1108 08:51:10.791246   73852 cli_runner.go:164] Run: docker container inspect ha-420865-m03 --format={{.State.Status}}
	I1108 08:51:10.810247   73852 status.go:371] ha-420865-m03 host status = "Running" (err=<nil>)
	I1108 08:51:10.810270   73852 host.go:66] Checking if "ha-420865-m03" exists ...
	I1108 08:51:10.810553   73852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-420865-m03
	I1108 08:51:10.831496   73852 host.go:66] Checking if "ha-420865-m03" exists ...
	I1108 08:51:10.831789   73852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:51:10.831826   73852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-420865-m03
	I1108 08:51:10.849155   73852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/ha-420865-m03/id_rsa Username:docker}
	I1108 08:51:10.939703   73852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:51:10.952470   73852 kubeconfig.go:125] found "ha-420865" server: "https://192.168.49.254:8443"
	I1108 08:51:10.952493   73852 api_server.go:166] Checking apiserver status ...
	I1108 08:51:10.952523   73852 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 08:51:10.963911   73852 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1149/cgroup
	W1108 08:51:10.972071   73852 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1149/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 08:51:10.972122   73852 ssh_runner.go:195] Run: ls
	I1108 08:51:10.975611   73852 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1108 08:51:10.979804   73852 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1108 08:51:10.979828   73852 status.go:463] ha-420865-m03 apiserver status = Running (err=<nil>)
	I1108 08:51:10.979837   73852 status.go:176] ha-420865-m03 status: &{Name:ha-420865-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:51:10.979856   73852 status.go:174] checking status of ha-420865-m04 ...
	I1108 08:51:10.980160   73852 cli_runner.go:164] Run: docker container inspect ha-420865-m04 --format={{.State.Status}}
	I1108 08:51:10.999775   73852 status.go:371] ha-420865-m04 host status = "Running" (err=<nil>)
	I1108 08:51:10.999797   73852 host.go:66] Checking if "ha-420865-m04" exists ...
	I1108 08:51:11.000057   73852 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-420865-m04
	I1108 08:51:11.018185   73852 host.go:66] Checking if "ha-420865-m04" exists ...
	I1108 08:51:11.018493   73852 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 08:51:11.018554   73852 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-420865-m04
	I1108 08:51:11.037041   73852 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/ha-420865-m04/id_rsa Username:docker}
	I1108 08:51:11.129391   73852 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 08:51:11.142039   73852 status.go:176] ha-420865-m04 status: &{Name:ha-420865-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (19.75s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.7s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.70s)

TestMultiControlPlane/serial/RestartSecondaryNode (14.79s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node start m02 --alsologtostderr -v 5
ha_test.go:422: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 node start m02 --alsologtostderr -v 5: (13.866873724s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (14.79s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.9s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.90s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.46s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node list --alsologtostderr -v 5
ha_test.go:464: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 stop --alsologtostderr -v 5
E1108 08:51:43.667136    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:43.674146    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:43.685544    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:43.706912    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:43.748336    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:43.830319    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:43.992085    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:44.313764    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:44.955812    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:46.237114    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:48.799528    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:51:53.921318    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:52:04.163605    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:464: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 stop --alsologtostderr -v 5: (44.696469569s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 start --wait true --alsologtostderr -v 5
E1108 08:52:24.535981    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:52:24.645693    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 08:53:05.607000    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:469: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 start --wait true --alsologtostderr -v 5: (55.632083535s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node list --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (100.46s)

TestMultiControlPlane/serial/DeleteSecondaryNode (10.6s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node delete m03 --alsologtostderr -v 5
ha_test.go:489: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 node delete m03 --alsologtostderr -v 5: (9.741368853s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (10.60s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.69s)

TestMultiControlPlane/serial/StopCluster (49.53s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 stop --alsologtostderr -v 5
ha_test.go:533: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 stop --alsologtostderr -v 5: (49.415183142s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5: exit status 7 (117.167596ms)
-- stdout --
	ha-420865
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-420865-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-420865-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1108 08:54:08.756859   87907 out.go:360] Setting OutFile to fd 1 ...
	I1108 08:54:08.757134   87907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:54:08.757145   87907 out.go:374] Setting ErrFile to fd 2...
	I1108 08:54:08.757149   87907 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 08:54:08.757422   87907 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 08:54:08.757583   87907 out.go:368] Setting JSON to false
	I1108 08:54:08.757615   87907 mustload.go:66] Loading cluster: ha-420865
	I1108 08:54:08.757673   87907 notify.go:221] Checking for updates...
	I1108 08:54:08.758155   87907 config.go:182] Loaded profile config "ha-420865": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 08:54:08.758183   87907 status.go:174] checking status of ha-420865 ...
	I1108 08:54:08.758779   87907 cli_runner.go:164] Run: docker container inspect ha-420865 --format={{.State.Status}}
	I1108 08:54:08.778174   87907 status.go:371] ha-420865 host status = "Stopped" (err=<nil>)
	I1108 08:54:08.778195   87907 status.go:384] host is not running, skipping remaining checks
	I1108 08:54:08.778201   87907 status.go:176] ha-420865 status: &{Name:ha-420865 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:54:08.778238   87907 status.go:174] checking status of ha-420865-m02 ...
	I1108 08:54:08.778513   87907 cli_runner.go:164] Run: docker container inspect ha-420865-m02 --format={{.State.Status}}
	I1108 08:54:08.796923   87907 status.go:371] ha-420865-m02 host status = "Stopped" (err=<nil>)
	I1108 08:54:08.796950   87907 status.go:384] host is not running, skipping remaining checks
	I1108 08:54:08.796957   87907 status.go:176] ha-420865-m02 status: &{Name:ha-420865-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 08:54:08.797002   87907 status.go:174] checking status of ha-420865-m04 ...
	I1108 08:54:08.797234   87907 cli_runner.go:164] Run: docker container inspect ha-420865-m04 --format={{.State.Status}}
	I1108 08:54:08.815616   87907 status.go:371] ha-420865-m04 host status = "Stopped" (err=<nil>)
	I1108 08:54:08.815645   87907 status.go:384] host is not running, skipping remaining checks
	I1108 08:54:08.815653   87907 status.go:176] ha-420865-m04 status: &{Name:ha-420865-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (49.53s)
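Each entry in the status dump is Go's %+v rendering of minikube's per-node status struct, and the exit status 7 above signals "some host is stopped" rather than a command failure. A minimal sketch that reproduces the dump format, using a hypothetical cut-down struct limited to the fields visible in the log:

    package main

    import "fmt"

    // nodeStatus is a hypothetical stand-in for minikube's internal status
    // type; only the fields visible in the log above are reproduced.
    type nodeStatus struct {
        Name       string
        Host       string
        Kubelet    string
        APIServer  string
        Kubeconfig string
        Worker     bool
    }

    func main() {
        s := nodeStatus{Name: "ha-420865", Host: "Stopped", Kubelet: "Stopped",
            APIServer: "Stopped", Kubeconfig: "Stopped"}
        // %+v prints field names, matching the "&{Name:... Host:...}" lines.
        fmt.Printf("%+v\n", &s)
    }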

TestMultiControlPlane/serial/RestartCluster (55.37s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio
E1108 08:54:27.528640    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
ha_test.go:562: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 start --wait true --alsologtostderr -v 5 --driver=docker  --container-runtime=crio: (54.573624362s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (55.37s)
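The readiness check at ha_test.go:594 hands kubectl a Go template that walks every node's `.status.conditions` and prints only the status of the `Ready` condition. A self-contained sketch evaluating the same template with Go's text/template over a trimmed-down, hypothetical `get nodes -o json` payload:

    package main

    import (
        "encoding/json"
        "os"
        "text/template"
    )

    // Hypothetical minimal `kubectl get nodes -o json` payload; the real
    // document carries many more fields.
    const nodesJSON = `{"items":[{"status":{"conditions":[
      {"type":"MemoryPressure","status":"False"},
      {"type":"Ready","status":"True"}]}}]}`

    // Same template string the test passes to kubectl.
    const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

    func main() {
        var doc map[string]interface{}
        if err := json.Unmarshal([]byte(nodesJSON), &doc); err != nil {
            panic(err)
        }
        // Prints " True" for the one Ready node in the sample payload.
        t := template.Must(template.New("ready").Parse(tmpl))
        if err := t.Execute(os.Stdout, doc); err != nil {
            panic(err)
        }
    }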

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.69s)

TestMultiControlPlane/serial/AddSecondaryNode (43.64s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 node add --control-plane --alsologtostderr -v 5
ha_test.go:607: (dbg) Done: out/minikube-linux-amd64 -p ha-420865 node add --control-plane --alsologtostderr -v 5: (42.764296398s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-amd64 -p ha-420865 status --alsologtostderr -v 5
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (43.64s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.88s)

TestJSONOutput/start/Command (37.59s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-882453 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio
E1108 08:56:01.467185    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-882453 --output=json --user=testUser --memory=3072 --wait=true --driver=docker  --container-runtime=crio: (37.589643725s)
--- PASS: TestJSONOutput/start/Command (37.59s)
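With `--output=json`, each line minikube writes to stdout is one CloudEvent, so progress can be consumed with a line scanner. A sketch under that assumption; the step-event fields (`currentstep`, `totalsteps`, `message`) match the payloads visible in TestErrorJSONOutput below, and error handling is kept minimal:

    package main

    import (
        "bufio"
        "encoding/json"
        "fmt"
        "os/exec"
    )

    func main() {
        // Profile name mirrors the test above; any profile works.
        cmd := exec.Command("out/minikube-linux-amd64", "start",
            "-p", "json-output-882453", "--output=json", "--driver=docker")
        stdout, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(stdout)
        for sc.Scan() {
            var ev struct {
                Type string            `json:"type"`
                Data map[string]string `json:"data"`
            }
            if json.Unmarshal(sc.Bytes(), &ev) != nil {
                continue // skip anything that is not a single-line JSON event
            }
            if ev.Type == "io.k8s.sigs.minikube.step" {
                fmt.Printf("[%s/%s] %s\n", ev.Data["currentstep"],
                    ev.Data["totalsteps"], ev.Data["message"])
            }
        }
        cmd.Wait()
    }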

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (6.2s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-882453 --output=json --user=testUser
E1108 08:56:43.668065    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-882453 --output=json --user=testUser: (6.201441262s)
--- PASS: TestJSONOutput/stop/Command (6.20s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.23s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-381304 --memory=3072 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-381304 --memory=3072 --output=json --wait=true --driver=fail: exit status 56 (76.473352ms)

-- stdout --
	{"specversion":"1.0","id":"d664435f-ad62-481d-b47d-414819b492b9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-381304] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"203d47d1-43bc-48b1-805a-7f5cfdab8a19","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21866"}}
	{"specversion":"1.0","id":"f9d324a2-2cac-49c7-8ffb-ca9f23da5015","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"af1d728c-002c-4d3c-9da6-ec60c2695d53","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig"}}
	{"specversion":"1.0","id":"92a4d9f5-1b96-419d-8c25-e52bf7608da5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube"}}
	{"specversion":"1.0","id":"ef1bd848-545d-44c0-a090-f620d28b3e0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"b8f5dd79-fb59-42a7-bc74-6a1cbce4d8df","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"8dd21c8e-50fe-4dc2-9657-a767dba75d71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-381304" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-381304
--- PASS: TestErrorJSONOutput (0.23s)
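Failures ride the same envelope with type `io.k8s.sigs.minikube.error`, and the `data` block carries `name`, `exitcode`, and `message` (all strings). A sketch decoding the event logged above, trimmed to the fields the check cares about:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // cloudEvent matches the envelope visible in the log; Data holds the
    // event-specific payload.
    type cloudEvent struct {
        Type string            `json:"type"`
        Data map[string]string `json:"data"`
    }

    func main() {
        line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error",` +
            `"data":{"exitcode":"56","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS"}}`
        var ev cloudEvent
        if err := json.Unmarshal([]byte(line), &ev); err != nil {
            panic(err)
        }
        if ev.Type == "io.k8s.sigs.minikube.error" {
            fmt.Printf("error %s (exit %s): %s\n",
                ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
        }
    }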

TestKicCustomNetwork/create_custom_network (26.63s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-697731 --network=
E1108 08:57:11.370176    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-697731 --network=: (24.4421778s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-697731" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-697731
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-697731: (2.173314038s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.63s)

TestKicCustomNetwork/use_default_bridge_network (23.51s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-201804 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-201804 --network=bridge: (21.473029527s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-201804" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-201804
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-201804: (2.018465603s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (23.51s)

TestKicExistingNetwork (22.95s)

=== RUN   TestKicExistingNetwork
I1108 08:57:40.209074    9369 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1108 08:57:40.226403    9369 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1108 08:57:40.226476    9369 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1108 08:57:40.226494    9369 cli_runner.go:164] Run: docker network inspect existing-network
W1108 08:57:40.243314    9369 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1108 08:57:40.243340    9369 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

stderr:
Error response from daemon: network existing-network not found
I1108 08:57:40.243364    9369 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

** /stderr **
I1108 08:57:40.243496    9369 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1108 08:57:40.260411    9369 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-b3f2c64ee2dd IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:a2:bb:40:03:c1:35} reservation:<nil>}
I1108 08:57:40.260869    9369 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc002053110}
I1108 08:57:40.260901    9369 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1108 08:57:40.260941    9369 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1108 08:57:40.316676    9369 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-697352 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-697352 --network=existing-network: (20.770765822s)
helpers_test.go:175: Cleaning up "existing-network-697352" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-697352
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-697352: (2.033259433s)
I1108 08:58:03.138548    9369 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (22.95s)
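The I-lines show the mechanics: minikube probes the named network, finds 192.168.49.0/24 taken by an existing bridge, picks the next free /24, and creates the network with `created_by.minikube.sigs.k8s.io=true` labels so it can later tell its own networks apart (hence the final `--filter=label=...` listing). A sketch replaying the logged create command via os/exec; the subnet and name are taken from this run, not fixed behavior:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        args := []string{
            "network", "create", "--driver=bridge",
            "--subnet=192.168.58.0/24", "--gateway=192.168.58.1",
            "-o", "--ip-masq", "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=existing-network",
            "existing-network",
        }
        out, err := exec.Command("docker", args...).CombinedOutput()
        fmt.Printf("%s", out) // docker prints the new network ID
        if err != nil {
            panic(err)
        }
    }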

TestKicCustomSubnet (23.83s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-771440 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-771440 --subnet=192.168.60.0/24: (21.651193591s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-771440 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-771440" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-771440
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-771440: (2.158435747s)
--- PASS: TestKicCustomSubnet (23.83s)
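The assertion behind this test is effectively a one-liner: read the subnet back with docker's Go-template `--format` and compare it to the requested `--subnet`. A sketch of the same check, reusing the exact format string from the log:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        // Network name mirrors the profile above; any KIC profile works.
        out, err := exec.Command("docker", "network", "inspect",
            "custom-subnet-771440",
            "--format", "{{(index .IPAM.Config 0).Subnet}}").Output()
        if err != nil {
            panic(err)
        }
        got := strings.TrimSpace(string(out))
        fmt.Println("subnet:", got, "matches:", got == "192.168.60.0/24")
    }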

TestKicStaticIP (27.58s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-230298 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-230298 --static-ip=192.168.200.200: (25.271979539s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-230298 ip
helpers_test.go:175: Cleaning up "static-ip-230298" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-230298
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-230298: (2.168747858s)
--- PASS: TestKicStaticIP (27.58s)
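Since `minikube ip` prints the node's address, verifying `--static-ip` reduces to a string comparison. A sketch assuming the same binary path and profile as the test:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        out, err := exec.Command("out/minikube-linux-amd64",
            "-p", "static-ip-230298", "ip").Output()
        if err != nil {
            panic(err)
        }
        got := strings.TrimSpace(string(out))
        fmt.Println("static IP held:", got == "192.168.200.200")
    }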

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:70: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMinikubeProfile (48.42s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-963611 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-963611 --driver=docker  --container-runtime=crio: (21.030379487s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-965847 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-965847 --driver=docker  --container-runtime=crio: (21.412753873s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-963611
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-965847
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-965847" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-965847
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-965847: (2.389566445s)
helpers_test.go:175: Cleaning up "first-963611" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-963611
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-963611: (2.364397351s)
--- PASS: TestMinikubeProfile (48.42s)

TestMountStart/serial/StartWithMountFirst (8.29s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-487294 --memory=3072 --mount-string /tmp/TestMountStartserial2577635145/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-487294 --memory=3072 --mount-string /tmp/TestMountStartserial2577635145/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (7.284931639s)
--- PASS: TestMountStart/serial/StartWithMountFirst (8.29s)

TestMountStart/serial/VerifyMountFirst (0.27s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-487294 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.27s)

TestMountStart/serial/StartWithMountSecond (4.88s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:118: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-501782 --memory=3072 --mount-string /tmp/TestMountStartserial2577635145/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:118: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-501782 --memory=3072 --mount-string /tmp/TestMountStartserial2577635145/001:/minikube-host --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (3.883351146s)
--- PASS: TestMountStart/serial/StartWithMountSecond (4.88s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.71s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-487294 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-487294 --alsologtostderr -v=5: (1.711978319s)
--- PASS: TestMountStart/serial/DeleteFirst (1.71s)

TestMountStart/serial/VerifyMountPostDelete (0.26s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.26s)

TestMountStart/serial/Stop (1.24s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:196: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-501782
mount_start_test.go:196: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-501782: (1.244893254s)
--- PASS: TestMountStart/serial/Stop (1.24s)

TestMountStart/serial/RestartStopped (7.04s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:207: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-501782
mount_start_test.go:207: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-501782: (6.039730998s)
--- PASS: TestMountStart/serial/RestartStopped (7.04s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:134: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-501782 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (63.14s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-298790 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
E1108 09:01:01.463898    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-298790 --wait=true --memory=3072 --nodes=2 -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (1m2.674731391s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (63.14s)

TestMultiNode/serial/DeployApp2Nodes (3.3s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-298790 -- rollout status deployment/busybox: (1.890098561s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-9pdrg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-pgwjc -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-9pdrg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-pgwjc -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-9pdrg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-pgwjc -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (3.30s)
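The deploy test fans the same `nslookup` out over every busybox pod and three name forms (external name, short service name, in-cluster FQDN) to prove cluster DNS works from pods on both nodes. A sketch of that fan-out with os/exec; the pod names are the ones from this run and would differ on a fresh deployment:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        pods := []string{"busybox-7b57f96db7-9pdrg", "busybox-7b57f96db7-pgwjc"}
        names := []string{"kubernetes.io", "kubernetes.default",
            "kubernetes.default.svc.cluster.local"}
        for _, pod := range pods {
            for _, name := range names {
                out, err := exec.Command("kubectl", "--context", "multinode-298790",
                    "exec", pod, "--", "nslookup", name).CombinedOutput()
                fmt.Printf("%s -> %s: err=%v\n%s", pod, name, err, out)
            }
        }
    }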

TestMultiNode/serial/PingHostFrom2Pods (0.71s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-9pdrg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-9pdrg -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-pgwjc -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-298790 -- exec busybox-7b57f96db7-pgwjc -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.71s)
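The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes the fifth line of busybox's nslookup output and its third space-separated field, which is where the resolved address lands; the test then pings that IP from the pod. A Go rendering of the same extraction over a hypothetical busybox-style output:

    package main

    import (
        "fmt"
        "strings"
    )

    // hostIPFromNslookup reproduces `awk 'NR==5' | cut -d' ' -f3`: fifth
    // line, third space-separated field.
    func hostIPFromNslookup(out string) string {
        lines := strings.Split(out, "\n")
        if len(lines) < 5 {
            return ""
        }
        fields := strings.Split(lines[4], " ")
        if len(fields) < 3 {
            return ""
        }
        return fields[2]
    }

    func main() {
        // Hypothetical busybox-style nslookup output; formatting differs
        // between resolver implementations, which is why the test pins NR==5.
        sample := "Server:    10.96.0.10\n" +
            "Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
            "\n" +
            "Name:      host.minikube.internal\n" +
            "Address 1: 192.168.67.1\n"
        fmt.Println(hostIPFromNslookup(sample)) // 192.168.67.1
    }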

TestMultiNode/serial/AddNode (56.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-298790 -v=5 --alsologtostderr
E1108 09:01:43.667148    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-298790 -v=5 --alsologtostderr: (56.055448639s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (56.67s)

TestMultiNode/serial/MultiNodeLabels (0.06s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-298790 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

TestMultiNode/serial/ProfileList (0.63s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.63s)

TestMultiNode/serial/CopyFile (9.5s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --output json --alsologtostderr
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp testdata/cp-test.txt multinode-298790:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1689884810/001/cp-test_multinode-298790.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790:/home/docker/cp-test.txt multinode-298790-m02:/home/docker/cp-test_multinode-298790_multinode-298790-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m02 "sudo cat /home/docker/cp-test_multinode-298790_multinode-298790-m02.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790:/home/docker/cp-test.txt multinode-298790-m03:/home/docker/cp-test_multinode-298790_multinode-298790-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m03 "sudo cat /home/docker/cp-test_multinode-298790_multinode-298790-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp testdata/cp-test.txt multinode-298790-m02:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1689884810/001/cp-test_multinode-298790-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790-m02:/home/docker/cp-test.txt multinode-298790:/home/docker/cp-test_multinode-298790-m02_multinode-298790.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790 "sudo cat /home/docker/cp-test_multinode-298790-m02_multinode-298790.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790-m02:/home/docker/cp-test.txt multinode-298790-m03:/home/docker/cp-test_multinode-298790-m02_multinode-298790-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m03 "sudo cat /home/docker/cp-test_multinode-298790-m02_multinode-298790-m03.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp testdata/cp-test.txt multinode-298790-m03:/home/docker/cp-test.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1689884810/001/cp-test_multinode-298790-m03.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790-m03:/home/docker/cp-test.txt multinode-298790:/home/docker/cp-test_multinode-298790-m03_multinode-298790.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790 "sudo cat /home/docker/cp-test_multinode-298790-m03_multinode-298790.txt"
helpers_test.go:573: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 cp multinode-298790-m03:/home/docker/cp-test.txt multinode-298790-m02:/home/docker/cp-test_multinode-298790-m03_multinode-298790-m02.txt
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:551: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 ssh -n multinode-298790-m02 "sudo cat /home/docker/cp-test_multinode-298790-m03_multinode-298790-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (9.50s)
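Every hop above is the same two-step pattern: `minikube cp` the file onto a node, then `minikube ssh -n <node>` a `sudo cat` to read it back for comparison. A sketch of one hop; the binary path, profile, and paths mirror the log:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // copyAndVerify copies src onto node and cats it back, mirroring the
    // helpers_test.go pattern above.
    func copyAndVerify(profile, node, src, dst string) error {
        cp := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "cp", src, node+":"+dst)
        if out, err := cp.CombinedOutput(); err != nil {
            return fmt.Errorf("cp: %v: %s", err, out)
        }
        cat := exec.Command("out/minikube-linux-amd64", "-p", profile,
            "ssh", "-n", node, "sudo cat "+dst)
        out, err := cat.CombinedOutput()
        fmt.Printf("%s", out)
        return err
    }

    func main() {
        if err := copyAndVerify("multinode-298790", "multinode-298790-m02",
            "testdata/cp-test.txt", "/home/docker/cp-test.txt"); err != nil {
            panic(err)
        }
    }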

TestMultiNode/serial/StopNode (2.24s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-298790 node stop m03: (1.26920226s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-298790 status: exit status 7 (487.954752ms)

-- stdout --
	multinode-298790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-298790-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-298790-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr: exit status 7 (477.749543ms)

-- stdout --
	multinode-298790
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-298790-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-298790-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1108 09:02:24.988009  147735 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:02:24.988101  147735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:02:24.988116  147735 out.go:374] Setting ErrFile to fd 2...
	I1108 09:02:24.988119  147735 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:02:24.988275  147735 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:02:24.988422  147735 out.go:368] Setting JSON to false
	I1108 09:02:24.988449  147735 mustload.go:66] Loading cluster: multinode-298790
	I1108 09:02:24.988538  147735 notify.go:221] Checking for updates...
	I1108 09:02:24.988769  147735 config.go:182] Loaded profile config "multinode-298790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:02:24.988782  147735 status.go:174] checking status of multinode-298790 ...
	I1108 09:02:24.989196  147735 cli_runner.go:164] Run: docker container inspect multinode-298790 --format={{.State.Status}}
	I1108 09:02:25.008673  147735 status.go:371] multinode-298790 host status = "Running" (err=<nil>)
	I1108 09:02:25.008699  147735 host.go:66] Checking if "multinode-298790" exists ...
	I1108 09:02:25.008981  147735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-298790
	I1108 09:02:25.026339  147735 host.go:66] Checking if "multinode-298790" exists ...
	I1108 09:02:25.026737  147735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:02:25.026804  147735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-298790
	I1108 09:02:25.043572  147735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32904 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/multinode-298790/id_rsa Username:docker}
	I1108 09:02:25.133544  147735 ssh_runner.go:195] Run: systemctl --version
	I1108 09:02:25.139622  147735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:02:25.151108  147735 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:02:25.208248  147735 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:51 OomKillDisable:false NGoroutines:65 SystemTime:2025-11-08 09:02:25.198560051 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:02:25.208852  147735 kubeconfig.go:125] found "multinode-298790" server: "https://192.168.67.2:8443"
	I1108 09:02:25.208887  147735 api_server.go:166] Checking apiserver status ...
	I1108 09:02:25.208926  147735 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1108 09:02:25.220237  147735 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup
	W1108 09:02:25.228433  147735 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1244/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I1108 09:02:25.228478  147735 ssh_runner.go:195] Run: ls
	I1108 09:02:25.232007  147735 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1108 09:02:25.236057  147735 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1108 09:02:25.236084  147735 status.go:463] multinode-298790 apiserver status = Running (err=<nil>)
	I1108 09:02:25.236094  147735 status.go:176] multinode-298790 status: &{Name:multinode-298790 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:02:25.236117  147735 status.go:174] checking status of multinode-298790-m02 ...
	I1108 09:02:25.236375  147735 cli_runner.go:164] Run: docker container inspect multinode-298790-m02 --format={{.State.Status}}
	I1108 09:02:25.254721  147735 status.go:371] multinode-298790-m02 host status = "Running" (err=<nil>)
	I1108 09:02:25.254745  147735 host.go:66] Checking if "multinode-298790-m02" exists ...
	I1108 09:02:25.254993  147735 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-298790-m02
	I1108 09:02:25.272185  147735 host.go:66] Checking if "multinode-298790-m02" exists ...
	I1108 09:02:25.272512  147735 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1108 09:02:25.272557  147735 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-298790-m02
	I1108 09:02:25.289639  147735 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32909 SSHKeyPath:/home/jenkins/minikube-integration/21866-5860/.minikube/machines/multinode-298790-m02/id_rsa Username:docker}
	I1108 09:02:25.379343  147735 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1108 09:02:25.391040  147735 status.go:176] multinode-298790-m02 status: &{Name:multinode-298790-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:02:25.391082  147735 status.go:174] checking status of multinode-298790-m03 ...
	I1108 09:02:25.391357  147735 cli_runner.go:164] Run: docker container inspect multinode-298790-m03 --format={{.State.Status}}
	I1108 09:02:25.409033  147735 status.go:371] multinode-298790-m03 host status = "Stopped" (err=<nil>)
	I1108 09:02:25.409053  147735 status.go:384] host is not running, skipping remaining checks
	I1108 09:02:25.409059  147735 status.go:176] multinode-298790-m03 status: &{Name:multinode-298790-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.24s)
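The stderr trace shows how `status` decides the apiserver is Running: find the kube-apiserver process, then GET /healthz and expect `200 ok`. A bare sketch of the HTTP half; the real client authenticates with the cluster's certificates, so the InsecureSkipVerify below is only to keep the example self-contained, and anonymous requests may be rejected where /healthz is locked down:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
    )

    func main() {
        // Endpoint taken from the log above; substitute your node IP.
        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
        }}
        resp, err := client.Get("https://192.168.67.2:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%d %s\n", resp.StatusCode, body) // expect: 200 ok
    }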

TestMultiNode/serial/StartAfterStop (7.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 node start m03 -v=5 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-298790 node start m03 -v=5 --alsologtostderr: (6.466846459s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status -v=5 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (7.15s)

TestMultiNode/serial/RestartKeepsNodes (82.43s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-298790
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-298790
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-298790: (31.360991588s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-298790 --wait=true -v=5 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-298790 --wait=true -v=5 --alsologtostderr: (50.954348937s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-298790
--- PASS: TestMultiNode/serial/RestartKeepsNodes (82.43s)

TestMultiNode/serial/DeleteNode (5.21s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-298790 node delete m03: (4.625659589s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.21s)

TestMultiNode/serial/StopMultiNode (28.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-298790 stop: (28.356303224s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-298790 status: exit status 7 (99.777582ms)

-- stdout --
	multinode-298790
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-298790-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr: exit status 7 (96.318756ms)
-- stdout --
	multinode-298790
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-298790-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1108 09:04:28.714051  157501 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:04:28.714168  157501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:04:28.714180  157501 out.go:374] Setting ErrFile to fd 2...
	I1108 09:04:28.714186  157501 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:04:28.714420  157501 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:04:28.714601  157501 out.go:368] Setting JSON to false
	I1108 09:04:28.714628  157501 mustload.go:66] Loading cluster: multinode-298790
	I1108 09:04:28.714684  157501 notify.go:221] Checking for updates...
	I1108 09:04:28.715035  157501 config.go:182] Loaded profile config "multinode-298790": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:04:28.715050  157501 status.go:174] checking status of multinode-298790 ...
	I1108 09:04:28.715473  157501 cli_runner.go:164] Run: docker container inspect multinode-298790 --format={{.State.Status}}
	I1108 09:04:28.734728  157501 status.go:371] multinode-298790 host status = "Stopped" (err=<nil>)
	I1108 09:04:28.734752  157501 status.go:384] host is not running, skipping remaining checks
	I1108 09:04:28.734761  157501 status.go:176] multinode-298790 status: &{Name:multinode-298790 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1108 09:04:28.734813  157501 status.go:174] checking status of multinode-298790-m02 ...
	I1108 09:04:28.735142  157501 cli_runner.go:164] Run: docker container inspect multinode-298790-m02 --format={{.State.Status}}
	I1108 09:04:28.752798  157501 status.go:371] multinode-298790-m02 host status = "Stopped" (err=<nil>)
	I1108 09:04:28.752819  157501 status.go:384] host is not running, skipping remaining checks
	I1108 09:04:28.752824  157501 status.go:176] multinode-298790-m02 status: &{Name:multinode-298790-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (28.55s)
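
Note: the non-zero exits above are expected: `minikube status` signals a stopped host with exit code 7 rather than an error message. A sketch of guarding for that in a script (profile name illustrative; the exit-code meaning is taken from the output above):

	minikube -p demo status
	rc=$?
	# 0 = running; 7 = host stopped, as seen in the output above
	[ "$rc" -eq 7 ] && echo "profile is stopped, not broken"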

TestMultiNode/serial/RestartMultiNode (28s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-298790 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-298790 --wait=true -v=5 --alsologtostderr --driver=docker  --container-runtime=crio: (27.42261275s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-298790 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (28.00s)

TestMultiNode/serial/ValidateNameConflict (23.28s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-298790
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-298790-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-298790-m02 --driver=docker  --container-runtime=crio: exit status 14 (81.836683ms)
-- stdout --
	* [multinode-298790-m02] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	! Profile name 'multinode-298790-m02' is duplicated with machine name 'multinode-298790-m02' in profile 'multinode-298790'
	X Exiting due to MK_USAGE: Profile name should be unique
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-298790-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-298790-m03 --driver=docker  --container-runtime=crio: (20.467064428s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-298790
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-298790: exit status 80 (281.893568ms)
-- stdout --
	* Adding node m03 to cluster multinode-298790 as [worker]
	
	
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-298790-m03 already exists in multinode-298790-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-298790-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-298790-m03: (2.388455523s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (23.28s)
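
Note: the conflict comes from minikube's node-naming scheme: secondary machines in a profile are named <profile>-m02, <profile>-m03, and so on, so a new profile may not reuse one of those machine names. A sketch of the rule (profile names illustrative):

	minikube start -p demo --nodes=2   # creates machines "demo" and "demo-m02"
	minikube start -p demo-m02         # rejected: MK_USAGE duplicate name (exit 14)
	minikube start -p demo-b           # fine: no overlap with existing machine names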

TestPreload (103.51s)

=== RUN   TestPreload
preload_test.go:43: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-893168 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0
E1108 09:06:01.465519    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:43: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-893168 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.32.0: (45.008783997s)
preload_test.go:51: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-893168 image pull gcr.io/k8s-minikube/busybox
preload_test.go:51: (dbg) Done: out/minikube-linux-amd64 -p test-preload-893168 image pull gcr.io/k8s-minikube/busybox: (1.462082034s)
preload_test.go:57: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-893168
preload_test.go:57: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-893168: (5.880508719s)
preload_test.go:65: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-893168 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E1108 09:06:43.666505    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
preload_test.go:65: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-893168 --memory=3072 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (48.500167078s)
preload_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-893168 image list
helpers_test.go:175: Cleaning up "test-preload-893168" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-893168
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-893168: (2.432093556s)
--- PASS: TestPreload (103.51s)
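
Note: with --preload=false the cluster starts without the preloaded image tarball, so the busybox image must be pulled into the container runtime explicitly; the test then verifies the image survives a stop/start cycle. A sketch of the same flow (profile name illustrative):

	minikube start -p demo --preload=false --driver=docker --container-runtime=crio
	minikube -p demo image pull gcr.io/k8s-minikube/busybox
	minikube stop -p demo
	minikube start -p demo
	minikube -p demo image list   # busybox should still be listed after the restart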

TestScheduledStopUnix (97.61s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-021785 --memory=3072 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-021785 --memory=3072 --driver=docker  --container-runtime=crio: (21.453822908s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-021785 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-021785 -n scheduled-stop-021785
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-021785 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1108 09:07:29.599832    9369 retry.go:31] will retry after 127.339µs: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.600998    9369 retry.go:31] will retry after 120.735µs: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.602121    9369 retry.go:31] will retry after 124.941µs: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.603243    9369 retry.go:31] will retry after 404.169µs: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.604328    9369 retry.go:31] will retry after 416.161µs: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.605439    9369 retry.go:31] will retry after 1.051555ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.606563    9369 retry.go:31] will retry after 1.374238ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.608754    9369 retry.go:31] will retry after 1.06686ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.609874    9369 retry.go:31] will retry after 1.314282ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.612068    9369 retry.go:31] will retry after 3.420901ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.616222    9369 retry.go:31] will retry after 8.644865ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.625431    9369 retry.go:31] will retry after 12.911485ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.638680    9369 retry.go:31] will retry after 8.23123ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.647980    9369 retry.go:31] will retry after 17.660075ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.666249    9369 retry.go:31] will retry after 37.210475ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
I1108 09:07:29.704516    9369 retry.go:31] will retry after 30.102805ms: open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/scheduled-stop-021785/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-021785 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-021785 -n scheduled-stop-021785
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-021785
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-021785 --schedule 15s
E1108 09:08:06.737415    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-021785
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-021785: exit status 7 (78.832685ms)
-- stdout --
	scheduled-stop-021785
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-021785 -n scheduled-stop-021785
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-021785 -n scheduled-stop-021785: exit status 7 (76.521014ms)
-- stdout --
	Stopped
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-021785" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-021785
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-021785: (4.674825569s)
--- PASS: TestScheduledStopUnix (97.61s)
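
Note: scheduled stops are driven by a detached background process, which is why the test polls the profile's pid file and inspects {{.TimeToStop}}. The user-facing flow is just the flags already shown above; a sketch (profile name and durations illustrative):

	minikube stop -p demo --schedule 5m                  # arm a stop 5 minutes out
	minikube status -p demo --format '{{.TimeToStop}}'   # inspect the countdown
	minikube stop -p demo --cancel-scheduled             # disarm it
	minikube stop -p demo --schedule 15s                 # or let a short one fire
	sleep 20; minikube status -p demo                    # exit 7: host Stopped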

TestInsufficientStorage (9.66s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-247779 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-247779 --memory=3072 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.185486562s)
-- stdout --
	{"specversion":"1.0","id":"3d513659-2a31-4f2f-a6d0-6946f74566eb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-247779] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"10770f91-63b5-4b59-8552-1111cee3297d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=21866"}}
	{"specversion":"1.0","id":"72552bd5-a43e-44e5-a2d7-aed3ba1bcca0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"fca22711-1270-4e8e-acb7-0cad490fa158","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig"}}
	{"specversion":"1.0","id":"7d23c182-7ba3-4265-896d-e2d1f8018593","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube"}}
	{"specversion":"1.0","id":"c08bf62b-9a95-4e2e-a387-28d03bd25b1a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"be7cfa43-a863-4928-85ef-3a9860a6dcde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"0a047052-eba4-43bc-978a-33330e4f086f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"9ad10f66-d91f-46c2-9593-49a587d255db","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"f59d1102-3a4f-483a-ac47-5bbd4762d718","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"3e4cdcc1-59d4-4b93-af51-0605a40c2b45","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"81d6e675-673f-49bf-aa4f-3732ade74121","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-247779\" primary control-plane node in \"insufficient-storage-247779\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"7124a3ba-be5e-4e4c-b4a5-b48403eaab4a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.48-1761985721-21837 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"65377d16-077e-4ed9-afff-9017d1839c0a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=3072MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"6e06471a-6621-49b0-8438-6ab054ce180f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-247779 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-247779 --output=json --layout=cluster: exit status 7 (281.873095ms)
-- stdout --
	{"Name":"insufficient-storage-247779","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=3072MB) ...","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-247779","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1108 09:08:52.768538  177646 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-247779" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-247779 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-247779 --output=json --layout=cluster: exit status 7 (279.050373ms)
-- stdout --
	{"Name":"insufficient-storage-247779","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.37.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-247779","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
** stderr ** 
	E1108 09:08:53.047721  177758 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-247779" does not appear in /home/jenkins/minikube-integration/21866-5860/kubeconfig
	E1108 09:08:53.057957  177758 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/insufficient-storage-247779/events.json: no such file or directory
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-247779" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-247779
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-247779: (1.909987536s)
--- PASS: TestInsufficientStorage (9.66s)
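
Note: with --output=json, `minikube start` emits one CloudEvents-style JSON object per line (the objects shown above), which makes the stream easy to post-process. A sketch that surfaces only error events, assuming jq is available; the field paths follow the objects in the log:

	minikube start -p demo --output=json 2>/dev/null \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'
	# prints e.g.: RSRC_DOCKER_STORAGE: Docker is out of disk space! ...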

TestRunningBinaryUpgrade (49.57s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.32.0.4239479476 start -p running-upgrade-784389 --memory=3072 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.32.0.4239479476 start -p running-upgrade-784389 --memory=3072 --vm-driver=docker  --container-runtime=crio: (23.800737263s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-784389 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-784389 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (22.597037848s)
helpers_test.go:175: Cleaning up "running-upgrade-784389" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-784389
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-784389: (2.608739979s)
--- PASS: TestRunningBinaryUpgrade (49.57s)

TestKubernetesUpgrade (305.2s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.28.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (25.817301424s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-515251
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-515251: (2.332073279s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-515251 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-515251 status --format={{.Host}}: exit status 7 (109.983915ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m28.340353812s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-515251 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 106 (85.537694ms)
-- stdout --
	* [kubernetes-upgrade-515251] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.34.1 cluster to v1.28.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.28.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-515251
	    minikube start -p kubernetes-upgrade-515251 --kubernetes-version=v1.28.0
	    
	    2) Create a second cluster with Kubernetes 1.28.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5152512 --kubernetes-version=v1.28.0
	    
	    3) Use the existing cluster at version Kubernetes 1.34.1, by running:
	    
	    minikube start -p kubernetes-upgrade-515251 --kubernetes-version=v1.34.1
	    
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-515251 --memory=3072 --kubernetes-version=v1.34.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (5.244420688s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-515251" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-515251
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-515251: (3.193505706s)
--- PASS: TestKubernetesUpgrade (305.20s)
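
Note: the upgrade path above is stop-then-start with a newer --kubernetes-version on the same profile; downgrading an existing cluster is refused outright (exit 106) before any work is done. A sketch of both directions (profile name illustrative, versions taken from the log):

	minikube start -p demo --kubernetes-version=v1.28.0
	minikube stop -p demo
	minikube start -p demo --kubernetes-version=v1.34.1   # in-place upgrade
	minikube start -p demo --kubernetes-version=v1.28.0   # refused: K8S_DOWNGRADE_UNSUPPORTED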

TestMissingContainerUpgrade (103.66s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.32.0.4018941070 start -p missing-upgrade-811715 --memory=3072 --driver=docker  --container-runtime=crio
E1108 09:09:04.537422    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.32.0.4018941070 start -p missing-upgrade-811715 --memory=3072 --driver=docker  --container-runtime=crio: (40.178123193s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-811715
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-811715: (10.448760395s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-811715
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-811715 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-811715 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (49.886172695s)
helpers_test.go:175: Cleaning up "missing-upgrade-811715" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-811715
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-811715: (2.559611742s)
--- PASS: TestMissingContainerUpgrade (103.66s)
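
Note: the scenario here is a profile whose container was removed behind minikube's back; a plain `start` detects the missing machine and recreates it. A sketch of the same recovery (profile name illustrative):

	docker stop demo && docker rm demo   # simulate the container vanishing
	minikube start -p demo               # recreates the machine and restores the cluster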

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-845504 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:85: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-845504 --no-kubernetes --kubernetes-version=v1.28.0 --driver=docker  --container-runtime=crio: exit status 14 (96.065811ms)
-- stdout --
	* [NoKubernetes-845504] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
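
Note: --no-kubernetes conflicts with an explicit --kubernetes-version, including one inherited from global config, which is what the suggested `config unset` addresses. A sketch (profile name illustrative):

	minikube config unset kubernetes-version   # clear any globally pinned version
	minikube start -p demo --no-kubernetes --driver=docker --container-runtime=crio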

TestNoKubernetes/serial/StartWithK8s (40.88s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:97: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-845504 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:97: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-845504 --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.516406321s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-845504 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.88s)

TestNoKubernetes/serial/StartWithStopK8s (18.35s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:114: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-845504 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:114: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-845504 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (16.040368733s)
no_kubernetes_test.go:202: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-845504 status -o json
no_kubernetes_test.go:202: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-845504 status -o json: exit status 2 (294.2711ms)
-- stdout --
	{"Name":"NoKubernetes-845504","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:126: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-845504
no_kubernetes_test.go:126: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-845504: (2.011746165s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (18.35s)
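
Note: re-running `start --no-kubernetes` against an existing profile keeps the host container but leaves the control plane down, which is why `status -o json` reports Running/Stopped and exits 2. A sketch of checking just those fields, assuming jq is available:

	minikube -p demo status -o json | jq '{Host, Kubelet}'
	# {"Host":"Running","Kubelet":"Stopped"}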

TestNetworkPlugins/group/false (3.52s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-732849 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-732849 --memory=3072 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (181.140077ms)
-- stdout --
	* [false-732849] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=21866
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1108 09:09:38.334302  189750 out.go:360] Setting OutFile to fd 1 ...
	I1108 09:09:38.334596  189750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:09:38.334611  189750 out.go:374] Setting ErrFile to fd 2...
	I1108 09:09:38.334617  189750 out.go:408] TERM=,COLORTERM=, which probably does not support color
	I1108 09:09:38.334885  189750 root.go:338] Updating PATH: /home/jenkins/minikube-integration/21866-5860/.minikube/bin
	I1108 09:09:38.335484  189750 out.go:368] Setting JSON to false
	I1108 09:09:38.336641  189750 start.go:133] hostinfo: {"hostname":"ubuntu-20-agent-4","uptime":3129,"bootTime":1762589849,"procs":268,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"22.04","kernelVersion":"6.8.0-1043-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I1108 09:09:38.336728  189750 start.go:143] virtualization: kvm guest
	I1108 09:09:38.338077  189750 out.go:179] * [false-732849] minikube v1.37.0 on Ubuntu 22.04 (kvm/amd64)
	I1108 09:09:38.339805  189750 out.go:179]   - MINIKUBE_LOCATION=21866
	I1108 09:09:38.339802  189750 notify.go:221] Checking for updates...
	I1108 09:09:38.342888  189750 out.go:179]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1108 09:09:38.344452  189750 out.go:179]   - KUBECONFIG=/home/jenkins/minikube-integration/21866-5860/kubeconfig
	I1108 09:09:38.345679  189750 out.go:179]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/21866-5860/.minikube
	I1108 09:09:38.346861  189750 out.go:179]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I1108 09:09:38.348904  189750 out.go:179]   - MINIKUBE_FORCE_SYSTEMD=
	I1108 09:09:38.350535  189750 config.go:182] Loaded profile config "NoKubernetes-845504": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I1108 09:09:38.350661  189750 config.go:182] Loaded profile config "missing-upgrade-811715": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.28.3
	I1108 09:09:38.350784  189750 config.go:182] Loaded profile config "offline-crio-798164": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
	I1108 09:09:38.350911  189750 driver.go:422] Setting default libvirt URI to qemu:///system
	I1108 09:09:38.375993  189750 docker.go:124] docker version: linux-28.5.2:Docker Engine - Community
	I1108 09:09:38.376074  189750 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1108 09:09:38.443776  189750 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:false CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:67 OomKillDisable:false NGoroutines:79 SystemTime:2025-11-08 09:09:38.428125058 +0000 UTC LoggingDriver:json-file CgroupDriver:systemd NEventsListener:0 KernelVersion:6.8.0-1043-gcp OperatingSystem:Ubuntu 22.04.5 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[::1/128 127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33652076544 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent-4 Labels:[] ExperimentalBuild:false ServerVersion:28.5.2 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:442cb34bda9a6a0fed82a2ca7cade05c5c749582 Expected:} RuncCommit:{ID:v1.3.3-0-gd842d771 Expected:} InitCommit:{ID:de40ad0 Expected:} SecurityOptions:[name=apparmor name=seccomp,profile=builtin name=cgroupns] ProductLicense: Warnings:<nil> ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.29.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.40.3] map[Name:model Path:/usr/libexec/docker/cli-plugins/docker-model SchemaVersion:0.1.0 ShortDescription:Docker Model Runner Vendor:Docker Inc. Version:v1.0.0] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I1108 09:09:38.443909  189750 docker.go:319] overlay module found
	I1108 09:09:38.446102  189750 out.go:179] * Using the docker driver based on user configuration
	I1108 09:09:38.447566  189750 start.go:309] selected driver: docker
	I1108 09:09:38.447595  189750 start.go:930] validating driver "docker" against <nil>
	I1108 09:09:38.447611  189750 start.go:941] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1108 09:09:38.449697  189750 out.go:203] 
	W1108 09:09:38.451158  189750 out.go:285] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1108 09:09:38.452515  189750 out.go:203] 
** /stderr **
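
Note: the failure is by design: the crio runtime has no built-in networking, so minikube rejects --cni=false for it before creating anything. Any concrete CNI choice passes validation; a sketch (profile name illustrative, bridge is one of the accepted --cni values):

	minikube start -p demo --container-runtime=crio --cni=bridge
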
net_test.go:88: 
----------------------- debugLogs start: false-732849 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-732849

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-732849

>>> host: /etc/nsswitch.conf:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: /etc/hosts:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: /etc/resolv.conf:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-732849

>>> host: crictl pods:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: crictl containers:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> k8s: describe netcat deployment:
error: context "false-732849" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-732849" does not exist

>>> k8s: netcat logs:
error: context "false-732849" does not exist

>>> k8s: describe coredns deployment:
error: context "false-732849" does not exist

>>> k8s: describe coredns pods:
error: context "false-732849" does not exist

>>> k8s: coredns logs:
error: context "false-732849" does not exist

>>> k8s: describe api server pod(s):
error: context "false-732849" does not exist

>>> k8s: api server logs:
error: context "false-732849" does not exist

>>> host: /etc/cni:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: ip a s:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: ip r s:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: iptables-save:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: iptables table nat:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> k8s: describe kube-proxy daemon set:
error: context "false-732849" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-732849" does not exist

>>> k8s: kube-proxy logs:
error: context "false-732849" does not exist

>>> host: kubelet daemon status:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: kubelet daemon config:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> k8s: kubelet logs:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-845504
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-811715
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-crio-798164
contexts:
- context:
    cluster: NoKubernetes-845504
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-845504
  name: NoKubernetes-845504
- context:
    cluster: missing-upgrade-811715
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-811715
  name: missing-upgrade-811715
- context:
    cluster: offline-crio-798164
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-798164
  name: offline-crio-798164
current-context: offline-crio-798164
kind: Config
users:
- name: NoKubernetes-845504
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/NoKubernetes-845504/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/NoKubernetes-845504/client.key
- name: missing-upgrade-811715
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/missing-upgrade-811715/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/missing-upgrade-811715/client.key
- name: offline-crio-798164
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/offline-crio-798164/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/offline-crio-798164/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-732849

                                                
                                                
>>> host: docker daemon status:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-732849"

                                                
                                                
----------------------- debugLogs end: false-732849 [took: 3.18479138s] --------------------------------
helpers_test.go:175: Cleaning up "false-732849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-732849
--- PASS: TestNetworkPlugins/group/false (3.52s)

TestNoKubernetes/serial/Start (7.8s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:138: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-845504 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:138: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-845504 --no-kubernetes --memory=3072 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (7.801226983s)
--- PASS: TestNoKubernetes/serial/Start (7.80s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-845504 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-845504 "sudo systemctl is-active --quiet service kubelet": exit status 1 (272.74351ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
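
For anyone reproducing this check outside the harness: the assertion above runs `systemctl is-active` over `minikube ssh` and expects a non-zero exit (systemd returns 3 for an inactive unit, which surfaces as the ssh status seen in stderr). A minimal Go sketch of that pattern follows; only the profile name is taken from the log, the surrounding program is illustrative, not the harness code.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Ask systemd inside the minikube node whether kubelet is active.
	cmd := exec.Command("minikube", "ssh", "-p", "NoKubernetes-845504",
		"sudo systemctl is-active --quiet service kubelet")
	if err := cmd.Run(); err != nil {
		// A non-zero exit (systemd's 3 = inactive) is the expected outcome
		// for a --no-kubernetes profile.
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active (unexpected for --no-kubernetes)")
}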

TestNoKubernetes/serial/ProfileList (2s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:171: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:171: (dbg) Done: out/minikube-linux-amd64 profile list: (1.244546888s)
no_kubernetes_test.go:181: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (2.00s)
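
The JSON variant of `profile list` is convenient for scripting against a run like this one. A short Go sketch that extracts profile names; the `valid`/`invalid` top-level keys and the `Name` field are assumptions about the output shape, not something this log confirms.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList mirrors the assumed shape of `minikube profile list --output=json`.
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
	Invalid []struct {
		Name string `json:"Name"`
	} `json:"invalid"`
}

func main() {
	out, err := exec.Command("minikube", "profile", "list", "--output=json").Output()
	if err != nil {
		panic(err)
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		panic(err)
	}
	for _, p := range pl.Valid {
		fmt.Println("valid profile:", p.Name)
	}
}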

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:160: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-845504
no_kubernetes_test.go:160: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-845504: (1.277569701s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (11.33s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:193: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-845504 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:193: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-845504 --driver=docker  --container-runtime=crio: (11.333894433s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (11.33s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:149: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-845504 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:149: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-845504 "sudo systemctl is-active --quiet service kubelet": exit status 1 (305.693997ms)

** stderr **
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.31s)

TestStoppedBinaryUpgrade/Setup (0.38s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.38s)

TestStoppedBinaryUpgrade/Upgrade (39.47s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.32.0.901063601 start -p stopped-upgrade-312782 --memory=3072 --vm-driver=docker  --container-runtime=crio
E1108 09:11:01.464137    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/addons-758852/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.32.0.901063601 start -p stopped-upgrade-312782 --memory=3072 --vm-driver=docker  --container-runtime=crio: (21.052024316s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.32.0.901063601 -p stopped-upgrade-312782 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.32.0.901063601 -p stopped-upgrade-312782 stop: (4.614117628s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-312782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-312782 --memory=3072 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (13.805518013s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (39.47s)

TestPause/serial/Start (42.5s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-322482 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-322482 --memory=3072 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (42.494964376s)
--- PASS: TestPause/serial/Start (42.50s)

TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-312782
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.97s)

TestNetworkPlugins/group/auto/Start (39.59s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1108 09:11:43.667039    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (39.593278454s)
--- PASS: TestNetworkPlugins/group/auto/Start (39.59s)

TestPause/serial/SecondStartNoReconfiguration (6.15s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-322482 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-322482 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (6.140919623s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (6.15s)

TestNetworkPlugins/group/kindnet/Start (37.96s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (37.958375017s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (37.96s)

TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-732849 "pgrep -a kubelet"
I1108 09:12:15.117669    9369 config.go:182] Loaded profile config "auto-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.44s)

TestNetworkPlugins/group/auto/NetCatPod (10.23s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-732849 replace --force -f testdata/netcat-deployment.yaml
I1108 09:12:16.057650    9369 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 0 spec.replicas 1 status.replicas 0
I1108 09:12:16.059210    9369 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2dk8x" [e8f8329c-626e-4572-a9fe-3459dabe09b6] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2dk8x" [e8f8329c-626e-4572-a9fe-3459dabe09b6] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 9.004250618s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.23s)
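
Every NetCatPod step in this report follows the same wait loop: apply testdata/netcat-deployment.yaml, then poll pods labelled app=netcat until all report Running. A compact client-go sketch of that polling; the kubeconfig path, namespace, and poll interval are illustrative, and the harness's own helper lives in helpers_test.go, not here.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(15 * time.Minute) // matches the 15m0s wait above
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods("default").List(context.TODO(),
			metav1.ListOptions{LabelSelector: "app=netcat"})
		if err == nil && len(pods.Items) > 0 {
			ready := true
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					ready = false
				}
			}
			if ready {
				fmt.Println("app=netcat healthy")
				return
			}
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for app=netcat")
}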

TestNetworkPlugins/group/auto/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.15s)

TestNetworkPlugins/group/auto/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.09s)

TestNetworkPlugins/group/auto/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.09s)
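
Of the three connectivity probes above, HairPin is the one that exercises the CNI's hairpin handling: the pod dials its own Service by name ("netcat"), so the traffic must loop back to the same pod, whereas Localhost only dials 127.0.0.1. A Go sketch of issuing that probe, with only the context name and the nc command taken from the log:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// From inside the netcat deployment, dial the "netcat" service by name;
	// the connection routes back to the same pod (hairpin traffic).
	out, err := exec.Command("kubectl", "--context", "auto-732849",
		"exec", "deployment/netcat", "--",
		"/bin/sh", "-c", "nc -w 5 -i 5 -z netcat 8080").CombinedOutput()
	if err != nil {
		fmt.Printf("hairpin probe failed: %v\n%s", err, out)
		return
	}
	fmt.Println("hairpin connectivity OK")
}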

TestNetworkPlugins/group/calico/Start (51.27s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (51.266431243s)
--- PASS: TestNetworkPlugins/group/calico/Start (51.27s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:352: "kindnet-ntslg" [c962d55c-8ae9-4b25-8305-3f1d68646568] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003214743s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-732849 "pgrep -a kubelet"
I1108 09:12:55.385731    9369 config.go:182] Loaded profile config "kindnet-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.28s)

TestNetworkPlugins/group/kindnet/NetCatPod (8.2s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-732849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-2m9j2" [f9554f36-371a-4d1c-b354-aa86f2100aaf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-2m9j2" [f9554f36-371a-4d1c-b354-aa86f2100aaf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 8.003917583s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (8.20s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/Start (45.4s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (45.403074712s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (45.40s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:352: "calico-node-6g2qr" [2fefcdf5-2a05-4d27-a113-9582a468dd05] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.003437377s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-732849 "pgrep -a kubelet"
I1108 09:13:41.974208    9369 config.go:182] Loaded profile config "calico-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.35s)

TestNetworkPlugins/group/calico/NetCatPod (9.28s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-732849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-zhbn7" [43c0c07f-8fc2-4654-bdfd-f97ce3ef8014] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-zhbn7" [43c0c07f-8fc2-4654-bdfd-f97ce3ef8014] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.004128233s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.28s)

TestNetworkPlugins/group/enable-default-cni/Start (70.5s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m10.495053474s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (70.50s)

TestNetworkPlugins/group/calico/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.11s)

TestNetworkPlugins/group/calico/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.12s)

TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-732849 "pgrep -a kubelet"
I1108 09:14:10.855989    9369 config.go:182] Loaded profile config "custom-flannel-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.33s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-732849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-5tc7r" [db4dcd9e-78fb-40c2-8a58-e15ea3f47e95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-5tc7r" [db4dcd9e-78fb-40c2-8a58-e15ea3f47e95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 8.004084548s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/Start (44.87s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (44.867229379s)
--- PASS: TestNetworkPlugins/group/flannel/Start (44.87s)

TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.11s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.09s)

TestNetworkPlugins/group/bridge/Start (62.39s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-732849 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m2.389044871s)
--- PASS: TestNetworkPlugins/group/bridge/Start (62.39s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:352: "kube-flannel-ds-fvls6" [dd60fcda-aab9-4a97-b30b-fc70d4c57f49] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00450573s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-732849 "pgrep -a kubelet"
I1108 09:14:57.180547    9369 config.go:182] Loaded profile config "enable-default-cni-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.28s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-732849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-6cq8j" [12b3f2ca-a9d8-41c4-973f-f993fa1bb2d9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-6cq8j" [12b3f2ca-a9d8-41c4-973f-f993fa1bb2d9] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 8.004336232s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (8.19s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-732849 "pgrep -a kubelet"
I1108 09:15:03.001430    9369 config.go:182] Loaded profile config "flannel-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.29s)

TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-732849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-7crv7" [f16b852d-acc7-4a15-9296-243b7cb2901e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-7crv7" [f16b852d-acc7-4a15-9296-243b7cb2901e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 8.004217108s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (8.17s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.11s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.12s)

TestNetworkPlugins/group/flannel/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.12s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestStartStop/group/old-k8s-version/serial/FirstStart (52.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (52.22011298s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (52.22s)

TestStartStop/group/no-preload/serial/FirstStart (57.61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (57.612968606s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (57.61s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.4s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-732849 "pgrep -a kubelet"
I1108 09:15:42.310395    9369 config.go:182] Loaded profile config "bridge-732849": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.34.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.40s)

TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-732849 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:352: "netcat-cd4db9dbf-fpjgb" [dc21267e-8008-4274-8f1d-48b45609caa2] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:352: "netcat-cd4db9dbf-fpjgb" [dc21267e-8008-4274-8f1d-48b45609caa2] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.003843031s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.25s)

TestStartStop/group/embed-certs/serial/FirstStart (45.14s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (45.144678604s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.14s)

TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-732849 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-732849 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.75s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (39.747847542s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (39.75s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (10.25s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-339286 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [8691aea8-c976-4b06-9771-235555a5cebc] Pending
helpers_test.go:352: "busybox" [8691aea8-c976-4b06-9771-235555a5cebc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [8691aea8-c976-4b06-9771-235555a5cebc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.003468502s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-339286 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.25s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-271910 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [be77aed3-863e-433b-85af-7850d4a6cecd] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [be77aed3-863e-433b-85af-7850d4a6cecd] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003679964s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-271910 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.28s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (16.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-339286 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-339286 --alsologtostderr -v=3: (16.013436737s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (16.01s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.26s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-220714 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e] Pending
helpers_test.go:352: "busybox" [79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [79ac4ddd-dd20-4b0e-a64c-e6f9f768af4e] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.003776515s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-220714 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.26s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (16.31s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-271910 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-271910 --alsologtostderr -v=3: (16.306285591s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (16.31s)

TestStartStop/group/no-preload/serial/Stop (16.34s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-220714 --alsologtostderr -v=3
E1108 09:16:43.667270    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/functional-096647/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-220714 --alsologtostderr -v=3: (16.34456739s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (16.34s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286: exit status 7 (82.176587ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
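Each EnableAddonAfterStop subtest pairs the same two commands: a status probe, which exits 7 and prints "Stopped" for a halted profile, followed by enabling the dashboard addon against that stopped profile. A sketch, with the profile name taken from the run above:

	out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286
	out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-339286 --images=MetricsScraper=registry.k8s.io/echoserver:1.4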

TestStartStop/group/old-k8s-version/serial/SecondStart (51.68s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-339286 --memory=3072 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.28.0: (51.276901595s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-339286 -n old-k8s-version-339286
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (51.68s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:352: "busybox" [24063ace-e00f-4f59-99d7-9d633314fdbc] Pending
helpers_test.go:352: "busybox" [24063ace-e00f-4f59-99d7-9d633314fdbc] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:352: "busybox" [24063ace-e00f-4f59-99d7-9d633314fdbc] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.003858904s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.25s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910: exit status 7 (86.133983ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-271910 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (49.21s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-271910 --memory=3072 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (48.882864007s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-271910 -n embed-certs-271910
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (49.21s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714: exit status 7 (96.920035ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-220714 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/no-preload/serial/SecondStart (47.37s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-220714 --memory=3072 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (47.047282273s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-220714 -n no-preload-220714
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (47.37s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (17.34s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-677902 --alsologtostderr -v=3
E1108 09:17:15.972039    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:15.979137    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:15.990578    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:16.011981    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:16.053505    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:16.134955    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:16.296638    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:16.618271    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:17.263448    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:18.545072    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:21.107356    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-677902 --alsologtostderr -v=3: (17.339728656s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (17.34s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902: exit status 7 (79.244875ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-677902 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
E1108 09:17:26.229125    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
E1108 09:17:36.470451    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/auto-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-677902 --memory=3072 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (50.082187267s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-677902 -n default-k8s-diff-port-677902
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (50.41s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tt95r" [cb245aae-48cc-4ddb-bd6a-375932d5804e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003610531s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-8694d4445c-tt95r" [cb245aae-48cc-4ddb-bd6a-375932d5804e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003431427s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-339286 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.07s)
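The UserAppExistsAfterStop and AddonExistsAfterStop pairs can be reproduced by hand with two kubectl checks; the selector and deployment name below are taken from the log above:

	# The dashboard pod should be Running after the restart...
	kubectl --context old-k8s-version-339286 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	# ...and the metrics-scraper deployment should still exist.
	kubectl --context old-k8s-version-339286 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper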

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2xrs" [1e9ce6e5-b160-47a2-a07c-4419790dd9e6] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003357454s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7gzf8" [05b359a5-3638-479c-941a-2786588dbb11] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003598669s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-339286 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.23s)
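VerifyKubernetesImages lists every image in the restarted cluster and reports anything outside the expected minikube set, as in the three "Found non-minikube image" lines above. The listing is a single command; the jq filter below is only an illustrative way to pull the tags out of the JSON (the test does its filtering in Go, and the repoTags field name is an assumption about the output schema):

	out/minikube-linux-amd64 -p old-k8s-version-339286 image list --format=json
	# Illustrative post-processing, not part of the test itself:
	out/minikube-linux-amd64 -p old-k8s-version-339286 image list --format=json | jq -r '.[].repoTags[]'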

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-v2xrs" [1e9ce6e5-b160-47a2-a07c-4419790dd9e6] Running
E1108 09:17:50.390574    9369 cert_rotation.go:172] "Loading client cert failed" err="open /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/kindnet-732849/client.crt: no such file or directory" logger="tls-transport-cache.UnhandledError" key="key"
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004081513s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-220714 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-7gzf8" [05b359a5-3638-479c-941a-2786588dbb11] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003930999s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-271910 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-220714 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-271910 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/FirstStart (28.79s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (28.785340869s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.79s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tzbhn" [ac0f9c47-0b03-4970-aa59-3a5c15e3435d] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003293767s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:352: "kubernetes-dashboard-855c9754f9-tzbhn" [ac0f9c47-0b03-4970-aa59-3a5c15e3435d] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002776655s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-677902 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-677902 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.24s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (2.41s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-620528 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-620528 --alsologtostderr -v=3: (2.412868871s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (2.41s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528: exit status 7 (80.137034ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-620528 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/newest-cni/serial/SecondStart (10.45s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-620528 --memory=3072 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.34.1: (10.129298105s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-620528 -n newest-cni-620528
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (10.45s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)
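The two warnings above are expected: newest-cni is started with --network-plugin=cni but no CNI manifest is ever applied, so workload pods cannot schedule and both app checks short-circuit. As a hedged illustration only, the "additional setup" would be applying a CNI before deploying anything, e.g. (manifest file illustrative, flannel named purely as an example):

	kubectl --context newest-cni-620528 apply -f kube-flannel.yml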

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-620528 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20250512-df8de77b
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

Test skip (27/327)

TestDownloadOnly/v1.28.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.0/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.0/cached-images (0.00s)

TestDownloadOnly/v1.28.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.0/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.0/binaries (0.00s)

TestDownloadOnly/v1.28.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.0/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.0/kubectl (0.00s)

TestDownloadOnly/v1.34.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.34.1/cached-images
aaa_download_only_test.go:128: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.34.1/cached-images (0.00s)

TestDownloadOnly/v1.34.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.34.1/binaries
aaa_download_only_test.go:150: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.34.1/binaries (0.00s)

TestDownloadOnly/v1.34.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.34.1/kubectl
aaa_download_only_test.go:166: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.34.1/kubectl (0.00s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:763: skipping GCPAuth addon test until 'Permission "artifactregistry.repositories.downloadArtifacts" denied on resource "projects/k8s-minikube/locations/us/repositories/test-artifacts" (or it may not exist)' issue is resolved
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm
=== CONT  TestAddons/parallel/Olm
addons_test.go:483: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:37: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:101: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:478: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:565: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:82: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestISOImage (0s)

=== RUN   TestISOImage
iso_test.go:36: This test requires a VM driver
--- SKIP: TestISOImage (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.87s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:636: 
----------------------- debugLogs start: kubenet-732849 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-732849

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-732849

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: /etc/hosts:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: /etc/resolv.conf:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-732849

>>> host: crictl pods:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: crictl containers:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> k8s: describe netcat deployment:
error: context "kubenet-732849" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-732849" does not exist

>>> k8s: netcat logs:
error: context "kubenet-732849" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-732849" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-732849" does not exist

>>> k8s: coredns logs:
error: context "kubenet-732849" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-732849" does not exist

>>> k8s: api server logs:
error: context "kubenet-732849" does not exist

>>> host: /etc/cni:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: ip a s:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: ip r s:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: iptables-save:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: iptables table nat:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-732849" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-732849" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-732849" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: kubelet daemon config:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> k8s: kubelet logs:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-845504
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-811715
contexts:
- context:
    cluster: NoKubernetes-845504
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-845504
  name: NoKubernetes-845504
- context:
    cluster: missing-upgrade-811715
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-811715
  name: missing-upgrade-811715
current-context: missing-upgrade-811715
kind: Config
users:
- name: NoKubernetes-845504
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/NoKubernetes-845504/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/NoKubernetes-845504/client.key
- name: missing-upgrade-811715
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/missing-upgrade-811715/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/missing-upgrade-811715/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-732849

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-732849"

                                                
                                                
----------------------- debugLogs end: kubenet-732849 [took: 3.689823048s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-732849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-732849
--- SKIP: TestNetworkPlugins/group/kubenet (3.87s)
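Every kubectl-backed probe in the dump above fails the same way because the shared kubeconfig has no kubenet-732849 entry; its current-context points at missing-upgrade-811715. A minimal Go sketch of a pre-check a collector could run before shelling out to kubectl, using the standard client-go loader (the gating itself is an assumption for illustration, not minikube's actual debugLogs code):

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
)

// contextExists reports whether a named context is present in the kubeconfig.
func contextExists(kubeconfigPath, name string) (bool, error) {
	cfg, err := clientcmd.LoadFromFile(kubeconfigPath)
	if err != nil {
		return false, fmt.Errorf("loading %s: %w", kubeconfigPath, err)
	}
	_, ok := cfg.Contexts[name]
	return ok, nil
}

func main() {
	path := os.Getenv("KUBECONFIG")
	if path == "" {
		path = clientcmd.RecommendedHomeFile // defaults to ~/.kube/config
	}
	ok, err := contextExists(path, "kubenet-732849") // profile name from this run
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if !ok {
		fmt.Println(`context "kubenet-732849" not in kubeconfig; skipping kubectl probes`)
	}
}

Gating on the context up front would collapse the dozens of identical "does not exist" lines into a single skip notice.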

                                                
                                    
TestNetworkPlugins/group/cilium (3.51s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
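The skip is an unconditional gate in the test body; the harness still dumps debugLogs for the never-started profile, which is why every probe below fails. A rough Go sketch of the pattern (hypothetical shape, not the actual net_test.go source):

package nettest

import "testing"

// Sketch of the gate reported at net_test.go:102: skip before any cluster is
// started, so the suite stays green while the profile is never created.
func TestNetworkPluginsCilium(t *testing.T) {
	const ciliumKnownBroken = true // assumption: tracked as outdated upstream
	if ciliumKnownBroken {
		t.Skip("Skipping the test as it's interfering with other tests and is outdated")
	}
	// connectivity checks would run here
}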
panic.go:636: 
----------------------- debugLogs start: cilium-732849 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-732849

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-732849

>>> host: /etc/nsswitch.conf:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/hosts:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/resolv.conf:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-732849

>>> host: crictl pods:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: crictl containers:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> k8s: describe netcat deployment:
error: context "cilium-732849" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-732849" does not exist

>>> k8s: netcat logs:
error: context "cilium-732849" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-732849" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-732849" does not exist

>>> k8s: coredns logs:
error: context "cilium-732849" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-732849" does not exist

>>> k8s: api server logs:
error: context "cilium-732849" does not exist

>>> host: /etc/cni:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: ip a s:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: ip r s:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: iptables-save:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: iptables table nat:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-732849

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-732849

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-732849" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-732849" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-732849

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-732849

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-732849" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-732849" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-732849" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-732849" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-732849" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: kubelet daemon config:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> k8s: kubelet logs:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.94.2:8443
  name: NoKubernetes-845504
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.103.2:8443
  name: missing-upgrade-811715
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/21866-5860/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: cluster_info
    server: https://192.168.76.2:8443
  name: offline-crio-798164
contexts:
- context:
    cluster: NoKubernetes-845504
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: NoKubernetes-845504
  name: NoKubernetes-845504
- context:
    cluster: missing-upgrade-811715
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:34 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: missing-upgrade-811715
  name: missing-upgrade-811715
- context:
    cluster: offline-crio-798164
    extensions:
    - extension:
        last-update: Sat, 08 Nov 2025 09:09:38 UTC
        provider: minikube.sigs.k8s.io
        version: v1.37.0
      name: context_info
    namespace: default
    user: offline-crio-798164
  name: offline-crio-798164
current-context: offline-crio-798164
kind: Config
users:
- name: NoKubernetes-845504
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/NoKubernetes-845504/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/NoKubernetes-845504/client.key
- name: missing-upgrade-811715
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/missing-upgrade-811715/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/missing-upgrade-811715/client.key
- name: offline-crio-798164
  user:
    client-certificate: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/offline-crio-798164/client.crt
    client-key: /home/jenkins/minikube-integration/21866-5860/.minikube/profiles/offline-crio-798164/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-732849

>>> host: docker daemon status:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: docker daemon config:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: docker system info:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: cri-docker daemon status:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: cri-docker daemon config:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: cri-dockerd version:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: containerd daemon status:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: containerd daemon config:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: containerd config dump:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: crio daemon status:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: crio daemon config:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: /etc/crio:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"

>>> host: crio config:
* Profile "cilium-732849" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-732849"
----------------------- debugLogs end: cilium-732849 [took: 3.343960421s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-732849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-732849
--- SKIP: TestNetworkPlugins/group/cilium (3.51s)
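The host-side probes above all short-circuit on the missing profile. A Go sketch of how a collector could gate them up front by querying minikube directly; it assumes `minikube profile list -o json` and the `valid`/`Name` fields that command emits, so treat the decoded shape as an assumption rather than a stable contract:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// profileList decodes only the fields this check needs from
// `minikube profile list -o json` (assumed output shape).
type profileList struct {
	Valid []struct {
		Name string `json:"Name"`
	} `json:"valid"`
}

func profileExists(name string) (bool, error) {
	out, err := exec.Command("minikube", "profile", "list", "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var pl profileList
	if err := json.Unmarshal(out, &pl); err != nil {
		return false, err
	}
	for _, p := range pl.Valid {
		if p.Name == name {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	ok, err := profileExists("cilium-732849") // profile name from this run
	if err != nil {
		fmt.Println("profile list failed:", err)
		return
	}
	fmt.Println("profile exists:", ok)
}

One existence check per profile would replace the repeated two-line "Profile not found" notice after every host command.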

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.33s)
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-010877" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-010877
--- SKIP: TestStartStop/group/disable-driver-mounts (0.33s)
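The gate at start_stop_delete_test.go:101 follows the usual pattern of comparing the configured driver against the one the test requires before the body runs; this run used the docker driver, so the virtualbox-only test skips. A hedged Go sketch of that pattern (helper and flag names are hypothetical, not minikube's actual helpers):

package startstop

import (
	"flag"
	"testing"
)

// driver stands in for however the suite is told which VM driver is under
// test; the real suite wires this differently.
var driver = flag.String("driver", "docker", "VM driver under test")

// skipUnlessDriver skips the calling test when the configured driver does
// not match the one the test requires.
func skipUnlessDriver(t *testing.T, want string) {
	t.Helper()
	if *driver != want {
		t.Skipf("skipping: only runs on %s (current driver: %s)", want, *driver)
	}
}

func TestDisableDriverMounts(t *testing.T) {
	skipUnlessDriver(t, "virtualbox")
	// body would exercise --disable-driver-mounts here
}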

                                                
                                    